In this project, I used deep learning-based object detection algorithms to detect empty inventory in grocery stores. When shoppers find a shelf that doesn't have the product they need, many leave without asking store workers whether the item is available, even if it is sitting in the store's warehouse. The store then loses potential sales for as long as the shelf stays empty. I used machine learning models to help stores spot and replenish empty inventory quickly so that they don't lose customers and sales.
pip install tensorflow-object-detection-api  # install the TensorFlow Object Detection API helper package
Installing collected packages: jedi, jeepney, cryptography, SecretStorage, qtpy, jaraco.classes, rfc3986, requests-toolbelt, readme-renderer, qtconsole, pkginfo, keyring, colorama, twine, jupyter, tensorflow-object-detection-api
Successfully installed SecretStorage-3.3.3 colorama-0.4.5 cryptography-38.0.1 jaraco.classes-3.2.3 jedi-0.18.1 jeepney-0.8.0 jupyter-1.0.0 keyring-23.9.3 pkginfo-1.8.3 qtconsole-5.3.2 qtpy-2.2.1 readme-renderer-37.2 requests-toolbelt-0.10.0 rfc3986-2.0.0 tensorflow-object-detection-api-0.1.1 twine-3.7.1
SKU-110K images were collected from thousands of supermarket stores around the world, including locations in the United States, Europe, and East Asia. Dozens of paid associates captured the images with their personal cellphone cameras. Images were originally taken at no less than five-megapixel resolution and were then JPEG-compressed to one megapixel.
The SKU-110K dataset provides 11,762 images with more than 1.7 million annotated bounding boxes captured in densely packed scenarios: 8,233 images for training, 588 for validation, and 2,941 for testing, with around 1,733,678 instances in total. The images are collected from thousands of supermarket stores and vary in scale, viewing angle, lighting conditions, and noise level. All images are resized to a resolution of one megapixel. Most of the instances in the dataset are tightly packed and typically share a similar orientation.
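The dataset's annotations ship as plain CSV files, one row per bounding box. Assuming the commonly documented column order (`image_name,x1,y1,x2,y2,class,image_width,image_height`), a quick per-image box count, useful for sanity-checking the dense-packing statistics above, might look like this:

```python
import csv
from collections import Counter

def boxes_per_image(csv_path):
    """Count annotated bounding boxes per image in a SKU-110K-style CSV.

    Assumes columns: image_name, x1, y1, x2, y2, class, width, height.
    """
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 6:          # skip malformed/empty rows
                counts[row[0]] += 1
    return counts

# counts = boxes_per_image("annotations/annotations_train.csv")
# counts.most_common(5) lists the most densely packed shelf images first
```

The file name `annotations_train.csv` above is a placeholder; adjust it to wherever the downloaded annotations live.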
We selected a collection of detection models pre-trained on COCO, such as EfficientDet D1 640x640, SSD MobileNet V1 FPN 640x640, and SSD ResNet50 V1 FPN from the TensorFlow 2 Detection Model Zoo, along with the Detecto module in PyTorch, and fine-tuned them on the SKU-110K dataset. These pre-trained weights are useful for initialization when training on our new dataset. By comparing the performance of these models, we concluded that SSD ResNet50 delivers the best performance with respect to real-time detection, so we trained our model on the SSD ResNet50 V1 FPN architecture. The entire workflow of this architecture is illustrated in Figure 3. SSD with the ResNet50 V1 FPN feature extractor is an object detection model that has been trained on the COCO 2017 dataset. A momentum optimizer with a learning rate of 0.04 was used for the region proposal and classification network, and the learning rate was reduced on plateau. As shown in Figure 3, the Feature Pyramid Network (FPN) generates multi-level features as inputs to the SSD ResNet50 architecture: the FPN acts as a feature extractor and passes the extracted feature maps to the object detector. When the model localizes a small object, it draws a bounding box around it at each location. After training, the testing procedure was carried out by providing shelf images from the stores as input to the trained model. We also used TensorBoard, a companion tool of the TensorFlow Object Detection API, which allowed us to continuously monitor and visualize several training and evaluation metrics while the model was being trained. As the final step, we obtained the output containing the labeled empty-shelf detections along with a log file that records the bounding box and center point of each detection.
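The optimizer schedule described above can be sketched in plain Keras terms. This is an illustration, not the exact Object Detection API pipeline config: the 0.04 initial learning rate comes from the text, while the momentum value and the plateau parameters are assumed typical defaults.

```python
import tensorflow as tf

# SGD with momentum, matching the 0.04 initial learning rate in the text;
# momentum=0.9 is an assumed (commonly used) value.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.04, momentum=0.9)

# Reduce the learning rate when the monitored validation loss plateaus;
# factor and patience here are illustrative assumptions.
plateau_cb = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.1, patience=3, verbose=1)

# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           callbacks=[plateau_cb])
```

In the actual training runs, these settings live in the model's `pipeline.config` rather than in Keras callbacks.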
import os
CUSTOM_MODEL_NAME = 'ssd_mobilenet_v1'
PRETRAINED_MODEL_NAME = 'ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8'
PRETRAINED_MODEL_URL = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8.tar.gz'
TF_RECORD_SCRIPT_NAME = 'generate_tfrecord.py'
LABEL_MAP_NAME = 'label_map.pbtxt'
CUSTOM_MODEL_NAME2 = 'ssd_resnet101_v1'
PRETRAINED_MODEL_NAME2 = 'ssd_resnet101_v1_fpn_640x640_coco17_tpu-8'
PRETRAINED_MODEL_URL2 = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz'
CUSTOM_MODEL_NAME3 = 'ssd_mobilenet_v2_fpnlite'
PRETRAINED_MODEL_NAME3 = 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8'
PRETRAINED_MODEL_URL3 = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz'
paths = {
    'WORKSPACE_PATH': os.path.join('Tensorflow', 'workspace'),
    'SCRIPTS_PATH': os.path.join('Tensorflow', 'scripts'),
    'APIMODEL_PATH': os.path.join('Tensorflow', 'models'),
    'ANNOTATION_PATH': os.path.join('Tensorflow', 'workspace', 'annotations'),
    'IMAGE_PATH': os.path.join('Tensorflow', 'workspace', 'images'),
    'MODEL_PATH': os.path.join('Tensorflow', 'workspace', 'models'),
    'PRETRAINED_MODEL_PATH': os.path.join('Tensorflow', 'workspace', 'pre-trained-models'),
    'CHECKPOINT_PATH': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME),
    'CHECKPOINT_PATH2': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME2),
    'CHECKPOINT_PATH3': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME3),
    'OUTPUT_PATH': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME, 'export'),
    'OUTPUT_PATH2': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME2, 'export'),
    'OUTPUT_PATH3': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME3, 'export'),
    'TFJS_PATH': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME, 'tfjsexport'),
    'TFJS_PATH2': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME2, 'tfjsexport'),
    'TFJS_PATH3': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME3, 'tfjsexport'),
    'TFLITE_PATH': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME, 'tfliteexport'),
    'TFLITE_PATH2': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME2, 'tfliteexport'),
    'TFLITE_PATH3': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME3, 'tfliteexport'),
    'PROTOC_PATH': os.path.join('Tensorflow', 'protoc')
}
files = {
    'PIPELINE_CONFIG': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME, 'pipeline.config'),
    'PIPELINE_CONFIG2': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME2, 'pipeline.config'),
    'PIPELINE_CONFIG3': os.path.join('Tensorflow', 'workspace', 'models', CUSTOM_MODEL_NAME3, 'pipeline.config'),
    'TF_RECORD_SCRIPT': os.path.join(paths['SCRIPTS_PATH'], TF_RECORD_SCRIPT_NAME),
    'LABELMAP': os.path.join(paths['ANNOTATION_PATH'], LABEL_MAP_NAME)
}
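The `LABELMAP` file referenced in `files` is a protobuf text file listing the detection classes. A minimal sketch that writes a one-class map follows; the class name `empty_shelf` and the local output path are illustrative placeholders, not necessarily the label or location used in the project.

```python
# Write a minimal label_map.pbtxt; the single class 'empty_shelf'
# is an assumed placeholder for this project's annotation label.
labels = [{'name': 'empty_shelf', 'id': 1}]

with open('label_map.pbtxt', 'w') as f:
    for label in labels:
        f.write('item {\n')
        f.write("    name:'{}'\n".format(label['name']))
        f.write('    id:{}\n'.format(label['id']))
        f.write('}\n')
```

In the actual workflow the file would be written to `files['LABELMAP']` so the training pipeline can find it.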
for path in paths.values():
    if not os.path.exists(path):
        if os.name == 'posix':
            !mkdir -p {path}
        if os.name == 'nt':
            !mkdir {path}
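The shell-based `mkdir` calls above are platform-dependent. A portable standard-library equivalent is sketched below; the two-entry `paths` dict is a stand-in for the full dict defined earlier, kept small so the snippet is self-contained.

```python
import os

paths = {  # stand-in for the full paths dict defined above
    'WORKSPACE_PATH': os.path.join('Tensorflow', 'workspace'),
    'SCRIPTS_PATH': os.path.join('Tensorflow', 'scripts'),
}

# os.makedirs is cross-platform, replacing the posix/nt shell branches;
# exist_ok=True makes the call idempotent on re-runs.
for path in paths.values():
    os.makedirs(path, exist_ok=True)
```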
The model we use in our examples is the SSD ResNet50 V1 FPN 640x640 model, since it provides a relatively good trade-off between performance and speed. However, a number of other models are available, all of which are listed in the TensorFlow 2 Detection Model Zoo.
Once the *.tar.gz file has been downloaded, open it with a decompression program of your choice (e.g. 7-Zip, WinZip). Next, open the *.tar folder that appears inside, and extract its contents into the folder training_demo/pre-trained-models.
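Rather than extracting by hand, the archive can also be fetched and unpacked programmatically with the standard library. A minimal sketch, assuming the `PRETRAINED_MODEL_URL` and workspace layout defined earlier:

```python
import os
import tarfile
import urllib.request

def fetch_pretrained(url, dest):
    """Download a *.tar.gz model archive and extract it into dest."""
    os.makedirs(dest, exist_ok=True)
    archive, _ = urllib.request.urlretrieve(url)   # download to a temp file
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(dest)                       # checkpoint/ + pipeline.config

# fetch_pretrained(PRETRAINED_MODEL_URL,
#                  os.path.join('Tensorflow', 'workspace', 'pre-trained-models'))
```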
# https://www.tensorflow.org/install/source_windows
if os.name == 'nt':
    !pip install wget
    import wget
if not os.path.exists(os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection')):
    !git clone https://github.com/tensorflow/models {paths['APIMODEL_PATH']}
Cloning into 'Tensorflow/models'...
remote: Enumerating objects: 78241, done.
remote: Counting objects: 100% (44/44), done.
remote: Compressing objects: 100% (37/37), done.
remote: Total 78241 (delta 19), reused 18 (delta 7), pack-reused 78197
Receiving objects: 100% (78241/78241), 593.49 MiB | 26.29 MiB/s, done.
Resolving deltas: 100% (55625/55625), done.
# Install Tensorflow Object Detection
if os.name == 'posix':
    !apt-get install protobuf-compiler
    !cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .
if os.name == 'nt':
    url = "https://github.com/protocolbuffers/protobuf/releases/download/v3.15.6/protoc-3.15.6-win64.zip"
    wget.download(url)
    !move protoc-3.15.6-win64.zip {paths['PROTOC_PATH']}
    !cd {paths['PROTOC_PATH']} && tar -xf protoc-3.15.6-win64.zip
    os.environ['PATH'] += os.pathsep + os.path.abspath(os.path.join(paths['PROTOC_PATH'], 'bin'))
    !cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && copy object_detection\\packages\\tf2\\setup.py setup.py && python setup.py build && python setup.py install
    !cd Tensorflow/models/research/slim && pip install -e .
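Once protoc has compiled the protos and the package is installed, it is worth confirming the API is importable before moving on to training. A minimal check (the helper name here is our own) might be:

```python
def object_detection_available():
    """Return True if the TensorFlow Object Detection API imports cleanly.

    A False result usually means the protos were not compiled or the
    pip install of the research package failed.
    """
    try:
        from object_detection.utils import label_map_util  # noqa: F401
        return True
    except ImportError:
        return False

# object_detection_available() should return True before starting training
```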
Reading package lists... Done
Building dependency tree
Reading state information... Done
protobuf-compiler is already the newest version (3.0.0-9.1ubuntu1).
The following package was automatically installed and is no longer required:
libnvidia-common-460
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 27 not upgraded.
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Processing /content/Tensorflow/models/research
|████████████████████████████████| 578.0 MB 15 kB/s
Collecting sentencepiece
Downloading sentencepiece-0.1.97-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
|████████████████████████████████| 1.3 MB 56.7 MB/s
Requirement already satisfied: google-api-core<3dev,>=1.21.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.31.6)
Requirement already satisfied: httplib2<1dev,>=0.15.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.17.4)
Requirement already satisfied: google-auth<3dev,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.35.0)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.1)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.0.4)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (57.4.0)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (21.3)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.23.0)
Requirement already satisfied: protobuf<4.0.0dev,>=3.12.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.17.3)
Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2022.4)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.56.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.9)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.2.4)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.2.8)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (6.1.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (4.64.1)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.24.3)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2.8.2)
Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2022.9.24)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.4.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.10)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (4.1.1)
Collecting keras
Downloading keras-2.10.0-py2.py3-none-any.whl (1.7 MB)
|████████████████████████████████| 1.7 MB 58.4 MB/s
Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.6.3)
Requirement already satisfied: libclang>=13.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (14.0.6)
Requirement already satisfied: absl-py>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.3.0)
Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.14.1)
Collecting flatbuffers>=2.0
Downloading flatbuffers-22.9.24-py2.py3-none-any.whl (26 kB)
Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.2.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.49.1)
Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.2)
Collecting tensorboard<2.11,>=2.10
Downloading tensorboard-2.10.1-py3-none-any.whl (5.9 MB)
|████████████████████████████████| 5.9 MB 50.5 MB/s
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (2.0.1)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.27.0)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
Collecting tensorflow-estimator<2.11,>=2.10.0
Downloading tensorflow_estimator-2.10.0-py2.py3-none-any.whl (438 kB)
|████████████████████████████████| 438 kB 63.7 MB/s
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse>=1.6.0->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.37.1)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.5.2)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.8.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.6)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (0.6.1)
Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (3.4.1)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (4.13.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (3.9.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.11,>=2.10->tensorflow~=2.10.0->tf-models-official>=2.5.1->object-detection==0.1) (3.2.1)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-official>=2.5.1->object-detection==0.1) (0.1.7)
Requirement already satisfied: pyarrow<8.0.0,>=0.15.1 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (6.0.1)
Collecting fastavro<2,>=0.23.6
Downloading fastavro-1.6.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.4 MB)
|████████████████████████████████| 2.4 MB 54.0 MB/s
Collecting pymongo<4.0.0,>=3.8.0
Downloading pymongo-3.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (508 kB)
|████████████████████████████████| 508 kB 71.5 MB/s
Collecting dill<0.3.2,>=0.3.1.1
Downloading dill-0.3.1.1.tar.gz (151 kB)
|████████████████████████████████| 151 kB 75.0 MB/s
Collecting hdfs<3.0.0,>=2.1.0
Downloading hdfs-2.7.0-py3-none-any.whl (34 kB)
Collecting orjson<4.0
Downloading orjson-3.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270 kB)
|████████████████████████████████| 270 kB 72.8 MB/s
Collecting proto-plus<2,>=1.7.1
Downloading proto_plus-1.22.1-py3-none-any.whl (47 kB)
|████████████████████████████████| 47 kB 5.5 MB/s
Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.7)
Collecting cloudpickle~=2.1.0
Downloading cloudpickle-2.1.0-py3-none-any.whl (25 kB)
Collecting zstandard<1,>=0.18.0
Downloading zstandard-0.18.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)
|████████████████████████████████| 2.5 MB 47.0 MB/s
Collecting requests<3.0.0dev,>=2.18.0
Downloading requests-2.28.1-py3-none-any.whl (62 kB)
|████████████████████████████████| 62 kB 1.5 MB/s
Requirement already satisfied: pydot<2,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam->object-detection==0.1) (1.3.0)
Collecting docopt
Downloading docopt-0.6.2.tar.gz (25 kB)
Collecting protobuf<4.0.0dev,>=3.12.0
Downloading protobuf-3.19.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 56.9 MB/s
Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<3dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.1.1)
Requirement already satisfied: opencv-python>=4.1.0.25 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (4.6.0.66)
Requirement already satisfied: kiwisolver>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (1.4.4)
Requirement already satisfied: cycler>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from lvis->object-detection==0.1) (0.11.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.7/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.3)
Requirement already satisfied: scikit-learn>=0.21.3 in /usr/local/lib/python3.7/dist-packages (from seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.0.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.2.0)
Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons->tf-models-official>=2.5.1->object-detection==0.1) (2.7.1)
Requirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (2.3)
Requirement already satisfied: etils[epath] in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (0.8.0)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (1.10.0)
Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (5.10.0)
Requirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (0.10.2)
Building wheels for collected packages: object-detection, py-cpuinfo, dill, avro-python3, docopt, seqeval
Building wheel for object-detection (setup.py) ... done
Created wheel for object-detection: filename=object_detection-0.1-py3-none-any.whl size=1696560 sha256=e9879a9a57f296e0b03616065c52ad37ea4942e77ad2f02f33b9471971b1935a
Stored in directory: /tmp/pip-ephem-wheel-cache-dggcb27h/wheels/a9/26/bf/1cb2313ed4855917889b97658bf0a19999e3588e47867bdaee
Building wheel for py-cpuinfo (setup.py) ... done
Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=8bd231d5c167276fe10de1fa977a9be562344543a25d43b2c04ec6a91d3d3f56
Stored in directory: /root/.cache/pip/wheels/d2/f1/1f/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401
Building wheel for dill (setup.py) ... done
Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=37b3ec7f01226ae3f2edd67751dcee29735d08d4916d2917230504f82341833e
Stored in directory: /root/.cache/pip/wheels/a4/61/fd/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f
Building wheel for avro-python3 (setup.py) ... done
Created wheel for avro-python3: filename=avro_python3-1.10.2-py3-none-any.whl size=44010 sha256=4221c9cbea205184fd4e17b664b1f6630f278af7f208f622ce0119839daabc92
Stored in directory: /root/.cache/pip/wheels/d6/e5/b1/6b151d9b535ee50aaa6ab27d145a0104b6df02e5636f0376da
Building wheel for docopt (setup.py) ... done
Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13723 sha256=252ca163f6c1f272e85952f31960de63d9f8d0e0626f555cc9b38e22f82bd111
Stored in directory: /root/.cache/pip/wheels/72/b0/3f/1d95f96ff986c7dfffe46ce2be4062f38ebd04b506c77c81b9
Building wheel for seqeval (setup.py) ... done
Created wheel for seqeval: filename=seqeval-1.2.2-py3-none-any.whl size=16180 sha256=a17e086ee4bf56a584d1133a7232bade977649892a175d2fff95f92c943bfb27
Stored in directory: /root/.cache/pip/wheels/05/96/ee/7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
Successfully built object-detection py-cpuinfo dill avro-python3 docopt seqeval
Installing collected packages: requests, pyparsing, protobuf, tensorflow-estimator, tensorboard, keras, flatbuffers, tensorflow, portalocker, docopt, dill, zstandard, tf-slim, tensorflow-text, tensorflow-model-optimization, tensorflow-addons, seqeval, sentencepiece, sacrebleu, pyyaml, pymongo, py-cpuinfo, proto-plus, orjson, opencv-python-headless, immutabledict, hdfs, fastavro, cloudpickle, tf-models-official, tensorflow-io, lvis, avro-python3, apache-beam, object-detection
Attempting uninstall: requests
Found existing installation: requests 2.23.0
Uninstalling requests-2.23.0:
Successfully uninstalled requests-2.23.0
Attempting uninstall: pyparsing
Found existing installation: pyparsing 3.0.9
Uninstalling pyparsing-3.0.9:
Successfully uninstalled pyparsing-3.0.9
Attempting uninstall: protobuf
Found existing installation: protobuf 3.17.3
Uninstalling protobuf-3.17.3:
Successfully uninstalled protobuf-3.17.3
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.9.0
Uninstalling tensorflow-estimator-2.9.0:
Successfully uninstalled tensorflow-estimator-2.9.0
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.9.1
Uninstalling tensorboard-2.9.1:
Successfully uninstalled tensorboard-2.9.1
Attempting uninstall: keras
Found existing installation: keras 2.9.0
Uninstalling keras-2.9.0:
Successfully uninstalled keras-2.9.0
Attempting uninstall: flatbuffers
Found existing installation: flatbuffers 1.12
Uninstalling flatbuffers-1.12:
Successfully uninstalled flatbuffers-1.12
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.9.2
Uninstalling tensorflow-2.9.2:
Successfully uninstalled tensorflow-2.9.2
Attempting uninstall: dill
Found existing installation: dill 0.3.5.1
Uninstalling dill-0.3.5.1:
Successfully uninstalled dill-0.3.5.1
Attempting uninstall: pyyaml
Found existing installation: PyYAML 6.0
Uninstalling PyYAML-6.0:
Successfully uninstalled PyYAML-6.0
Attempting uninstall: pymongo
Found existing installation: pymongo 4.2.0
Uninstalling pymongo-4.2.0:
Successfully uninstalled pymongo-4.2.0
Attempting uninstall: opencv-python-headless
Found existing installation: opencv-python-headless 4.6.0.66
Uninstalling opencv-python-headless-4.6.0.66:
Successfully uninstalled opencv-python-headless-4.6.0.66
Attempting uninstall: cloudpickle
Found existing installation: cloudpickle 1.5.0
Uninstalling cloudpickle-1.5.0:
Successfully uninstalled cloudpickle-1.5.0
Successfully installed apache-beam-2.42.0 avro-python3-1.10.2 cloudpickle-2.1.0 dill-0.3.1.1 docopt-0.6.2 fastavro-1.6.1 flatbuffers-22.9.24 hdfs-2.7.0 immutabledict-2.2.1 keras-2.10.0 lvis-0.5.3 object-detection-0.1 opencv-python-headless-4.5.2.52 orjson-3.8.0 portalocker-2.6.0 proto-plus-1.22.1 protobuf-3.19.6 py-cpuinfo-8.0.0 pymongo-3.12.3 pyparsing-2.4.7 pyyaml-5.4.1 requests-2.28.1 sacrebleu-2.2.0 sentencepiece-0.1.97 seqeval-1.2.2 tensorboard-2.10.1 tensorflow-2.10.0 tensorflow-addons-0.18.0 tensorflow-estimator-2.10.0 tensorflow-io-0.27.0 tensorflow-model-optimization-0.7.3 tensorflow-text-2.10.0 tf-models-official-2.10.0 tf-slim-1.1.0 zstandard-0.18.0
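With the installation finished, a quick sanity check can confirm that the key packages are importable before moving on. This is a minimal sketch (not part of the original notebook) using only the standard library:

```python
import importlib.util

def is_importable(module: str) -> bool:
    """Return True if `module` can be imported in the current environment."""
    return importlib.util.find_spec(module) is not None

# After the install above, both the TF runtime and the Object Detection
# API package should be importable.
for module in ("tensorflow", "object_detection"):
    print(module, "->", "ok" if is_importable(module) else "missing")
```

If either module prints `missing`, re-running the install cell (and restarting the runtime) usually resolves it, since several packages were upgraded in place.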
!pip list
Package Version ------------------------------- ---------------------- absl-py 1.3.0 aeppl 0.0.33 aesara 2.7.9 aiohttp 3.8.3 aiosignal 1.2.0 alabaster 0.7.12 albumentations 1.2.1 altair 4.2.0 apache-beam 2.42.0 appdirs 1.4.4 arviz 0.12.1 astor 0.8.1 astropy 4.3.1 astunparse 1.6.3 async-timeout 4.0.2 asynctest 0.13.0 atari-py 0.2.9 atomicwrites 1.4.1 attrs 22.1.0 audioread 3.0.0 autograd 1.5 avro-python3 1.10.2 Babel 2.10.3 backcall 0.2.0 beautifulsoup4 4.6.3 bleach 5.0.1 blis 0.7.8 bokeh 2.3.3 branca 0.5.0 bs4 0.0.1 CacheControl 0.12.11 cached-property 1.5.2 cachetools 4.2.4 catalogue 2.0.8 certifi 2022.9.24 cffi 1.15.1 cftime 1.6.2 chardet 3.0.4 charset-normalizer 2.1.1 click 7.1.2 clikit 0.6.2 cloudpickle 2.1.0 cmake 3.22.6 cmdstanpy 1.0.7 colorama 0.4.5 colorcet 3.0.1 colorlover 0.3.0 community 1.0.0b1 confection 0.0.3 cons 0.4.5 contextlib2 0.5.5 convertdate 2.4.0 crashtest 0.3.1 crcmod 1.7 cryptography 38.0.1 cufflinks 0.17.3 cupy-cuda11x 11.0.0 cvxopt 1.3.0 cvxpy 1.2.1 cycler 0.11.0 cymem 2.0.7 Cython 0.29.32 daft 0.0.4 dask 2022.2.0 datascience 0.17.5 debugpy 1.0.0 decorator 4.4.2 defusedxml 0.7.1 descartes 1.1.0 dill 0.3.1.1 distributed 2022.2.0 dlib 19.24.0 dm-tree 0.1.7 docopt 0.6.2 docutils 0.17.1 dopamine-rl 1.0.5 earthengine-api 0.1.327 easydict 1.10 ecos 2.0.10 editdistance 0.5.3 en-core-web-sm 3.4.1 entrypoints 0.4 ephem 4.1.3 et-xmlfile 1.1.0 etils 0.8.0 etuples 0.3.8 fa2 0.3.5 fastai 2.7.9 fastavro 1.6.1 fastcore 1.5.27 fastdownload 0.0.7 fastdtw 0.3.4 fastjsonschema 2.16.2 fastprogress 1.0.3 fastrlock 0.8 feather-format 0.4.1 filelock 3.8.0 firebase-admin 4.4.0 fix-yahoo-finance 0.0.22 Flask 1.1.4 flatbuffers 22.9.24 folium 0.12.1.post1 frozenlist 1.3.1 fsspec 2022.8.2 future 0.16.0 gast 0.4.0 GDAL 2.2.2 gdown 4.4.0 gensim 3.6.0 geographiclib 1.52 geopy 1.17.0 gin-config 0.5.0 glob2 0.7 google 2.0.3 google-api-core 1.31.6 google-api-python-client 1.12.11 google-auth 1.35.0 google-auth-httplib2 0.0.4 google-auth-oauthlib 0.4.6 google-cloud-bigquery 
1.21.0 google-cloud-bigquery-storage 1.1.2 google-cloud-core 1.0.3 google-cloud-datastore 1.8.0 google-cloud-firestore 1.7.0 google-cloud-language 1.2.0 google-cloud-storage 1.18.1 google-cloud-translate 1.5.0 google-colab 1.0.0 google-pasta 0.2.0 google-resumable-media 0.4.1 googleapis-common-protos 1.56.4 googledrivedownloader 0.4 graphviz 0.10.1 greenlet 1.1.3.post0 grpcio 1.49.1 gspread 3.4.2 gspread-dataframe 3.0.8 gym 0.25.2 gym-notices 0.0.8 h5py 3.1.0 hdfs 2.7.0 HeapDict 1.0.1 hijri-converter 2.2.4 holidays 0.16 holoviews 1.14.9 html5lib 1.0.1 httpimport 0.5.18 httplib2 0.17.4 httplib2shim 0.0.3 httpstan 4.6.1 humanize 0.5.1 hyperopt 0.1.2 idna 2.10 imageio 2.9.0 imagesize 1.4.1 imbalanced-learn 0.8.1 imblearn 0.0 imgaug 0.4.0 immutabledict 2.2.1 importlib-metadata 4.13.0 importlib-resources 5.10.0 imutils 0.5.4 inflect 2.1.0 intel-openmp 2022.2.0 intervaltree 2.1.0 ipykernel 5.3.4 ipython 7.9.0 ipython-genutils 0.2.0 ipython-sql 0.3.9 ipywidgets 7.7.1 itsdangerous 1.1.0 jaraco.classes 3.2.3 jax 0.3.23 jaxlib 0.3.22+cuda11.cudnn805 jedi 0.18.1 jeepney 0.8.0 jieba 0.42.1 Jinja2 2.11.3 joblib 1.2.0 jpeg4py 0.1.4 jsonschema 4.3.3 jupyter 1.0.0 jupyter-client 6.1.12 jupyter-console 6.1.0 jupyter-core 4.11.1 jupyterlab-widgets 3.0.3 kaggle 1.5.12 kapre 0.3.7 keras 2.10.0 Keras-Preprocessing 1.1.2 keras-vis 0.4.1 keyring 23.9.3 kiwisolver 1.4.4 korean-lunar-calendar 0.3.1 langcodes 3.3.0 libclang 14.0.6 librosa 0.8.1 lightgbm 2.2.3 llvmlite 0.39.1 lmdb 0.99 locket 1.0.0 logical-unification 0.4.5 LunarCalendar 0.0.9 lvis 0.5.3 lxml 4.9.1 Markdown 3.4.1 MarkupSafe 2.0.1 marshmallow 3.18.0 matplotlib 3.2.2 matplotlib-venn 0.11.7 miniKanren 1.0.3 missingno 0.5.1 mistune 0.8.4 mizani 0.7.3 mkl 2019.0 mlxtend 0.14.0 more-itertools 8.14.0 moviepy 0.2.3.5 mpmath 1.2.1 msgpack 1.0.4 multidict 6.0.2 multipledispatch 0.6.0 multitasking 0.0.11 murmurhash 1.0.9 music21 5.5.0 natsort 5.5.0 nbconvert 5.6.1 nbformat 5.7.0 netCDF4 1.6.1 networkx 2.6.3 nibabel 3.0.2 nltk 3.7 
notebook 5.5.0 numba 0.56.3 numexpr 2.8.3 numpy 1.21.6 oauth2client 4.1.3 oauthlib 3.2.1 object-detection 0.1 okgrade 0.4.3 opencv-contrib-python 4.6.0.66 opencv-python 4.6.0.66 opencv-python-headless 4.5.2.52 openpyxl 3.0.10 opt-einsum 3.3.0 orjson 3.8.0 osqp 0.6.2.post0 packaging 21.3 palettable 3.3.0 pandas 1.3.5 pandas-datareader 0.9.0 pandas-gbq 0.13.3 pandas-profiling 1.4.1 pandocfilters 1.5.0 panel 0.12.1 param 1.12.2 parso 0.8.3 partd 1.3.0 pastel 0.2.1 pathlib 1.0.1 pathy 0.6.2 patsy 0.5.3 pep517 0.13.0 pexpect 4.8.0 pickleshare 0.7.5 Pillow 7.1.2 pip 21.1.3 pip-tools 6.2.0 pkginfo 1.8.3 plotly 5.5.0 plotnine 0.8.0 pluggy 0.7.1 pooch 1.6.0 portalocker 2.6.0 portpicker 1.3.9 prefetch-generator 1.0.1 preshed 3.0.8 prettytable 3.4.1 progressbar2 3.38.0 promise 2.3 prompt-toolkit 2.0.10 prophet 1.1.1 proto-plus 1.22.1 protobuf 3.19.6 psutil 5.4.8 psycopg2 2.9.4 ptyprocess 0.7.0 py 1.11.0 py-cpuinfo 8.0.0 pyarrow 6.0.1 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycocotools 2.0.5 pycparser 2.21 pyct 0.4.8 pydantic 1.9.2 pydata-google-auth 1.4.0 pydot 1.3.0 pydot-ng 2.0.0 pydotplus 2.0.2 PyDrive 1.3.1 pyemd 0.5.1 pyerfa 2.0.0.1 Pygments 2.6.1 pygobject 3.26.1 pylev 1.4.0 pymc 4.1.4 PyMeeus 0.5.11 pymongo 3.12.3 pymystem3 0.2.0 PyOpenGL 3.1.6 pyparsing 2.4.7 pyrsistent 0.18.1 pysimdjson 3.2.0 pysndfile 1.3.8 PySocks 1.7.1 pystan 3.3.0 pytest 3.6.4 python-apt 0.0.0 python-chess 0.23.11 python-dateutil 2.8.2 python-louvain 0.16 python-slugify 6.1.2 python-utils 3.3.3 pytz 2022.4 pyviz-comms 2.2.1 PyWavelets 1.3.0 PyYAML 5.4.1 pyzmq 23.2.1 qdldl 0.1.5.post2 qtconsole 5.3.2 QtPy 2.2.1 qudida 0.0.4 readme-renderer 37.2 regex 2022.6.2 requests 2.28.1 requests-oauthlib 1.3.1 requests-toolbelt 0.10.0 resampy 0.4.2 rfc3986 2.0.0 rpy2 3.4.5 rsa 4.9 sacrebleu 2.2.0 scikit-image 0.18.3 scikit-learn 1.0.2 scipy 1.7.3 screen-resolution-extra 0.0.0 scs 3.2.0 seaborn 0.11.2 SecretStorage 3.3.3 Send2Trash 1.8.0 sentencepiece 0.1.97 seqeval 1.2.2 setuptools 57.4.0 setuptools-git 1.2 Shapely 
1.8.5.post1 six 1.15.0 sklearn-pandas 1.8.0 smart-open 5.2.1 snowballstemmer 2.2.0 sortedcontainers 2.4.0 soundfile 0.11.0 spacy 3.4.1 spacy-legacy 3.0.10 spacy-loggers 1.0.3 Sphinx 1.8.6 sphinxcontrib-serializinghtml 1.1.5 sphinxcontrib-websupport 1.2.4 SQLAlchemy 1.4.41 sqlparse 0.4.3 srsly 2.4.4 statsmodels 0.12.2 sympy 1.7.1 tables 3.7.0 tabulate 0.8.10 tblib 1.7.0 tenacity 8.1.0 tensorboard 2.10.1 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.10.0 tensorflow-addons 0.18.0 tensorflow-datasets 4.6.0 tensorflow-estimator 2.10.0 tensorflow-gcs-config 2.9.1 tensorflow-hub 0.12.0 tensorflow-io 0.27.0 tensorflow-io-gcs-filesystem 0.27.0 tensorflow-metadata 1.10.0 tensorflow-model-optimization 0.7.3 tensorflow-object-detection-api 0.1.1 tensorflow-probability 0.16.0 tensorflow-text 2.10.0 termcolor 2.0.1 terminado 0.13.3 testpath 0.6.0 text-unidecode 1.3 textblob 0.15.3 tf-models-official 2.10.0 tf-slim 1.1.0 thinc 8.1.4 threadpoolctl 3.1.0 tifffile 2021.11.2 toml 0.10.2 tomli 2.0.1 toolz 0.12.0 torch 1.12.1+cu113 torchaudio 0.12.1+cu113 torchsummary 1.5.1 torchtext 0.13.1 torchvision 0.13.1+cu113 tornado 5.1.1 tqdm 4.64.1 traitlets 5.1.1 tweepy 3.10.0 twine 3.7.1 typeguard 2.7.1 typer 0.4.2 typing-extensions 4.1.1 tzlocal 1.5.1 ujson 5.5.0 uritemplate 3.0.1 urllib3 1.24.3 vega-datasets 0.9.0 wasabi 0.10.1 wcwidth 0.2.5 webargs 8.2.0 webencodings 0.5.1 Werkzeug 1.0.1 wheel 0.37.1 widgetsnbextension 3.6.1 wordcloud 1.8.2.2 wrapt 1.14.1 xarray 0.20.2 xarray-einstats 0.2.2 xgboost 0.90 xkit 0.0.0 xlrd 1.1.0 xlwt 1.3.0 yarl 1.8.1 yellowbrick 1.5 zict 2.2.0 zipp 3.9.0 zstandard 0.18.0
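The verification step below references a `paths` dictionary defined earlier in the notebook. As a rough sketch of the assumed layout (the directory names here are illustrative assumptions, matching a typical local clone of the TensorFlow models repository):

```python
import os

# Hypothetical layout: APIMODEL_PATH points at the local clone of
# https://github.com/tensorflow/models (directory names are assumed)
paths = {'APIMODEL_PATH': os.path.join('Tensorflow', 'models')}

# The model-builder test script ships inside the research/ tree of the repo
VERIFICATION_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research',
                                   'object_detection', 'builders',
                                   'model_builder_tf2_test.py')
print(VERIFICATION_SCRIPT)
```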
Finally, we build the path to the Object Detection API's model-builder test script and run it to verify that the installation succeeded.
import os

VERIFICATION_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'builders', 'model_builder_tf2_test.py')
# Verify the installation by running the model builder test suite
!python {VERIFICATION_SCRIPT}
2022-10-22 13:05:20.594956: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 13:05:21.688679: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 13:05:21.688914: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 13:05:21.688937: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Running tests under Python 3.7.15: /usr/bin/python3
[ RUN ] ModelBuilderTF2Test.test_create_center_net_deepmac
2022-10-22 13:05:25.448235: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
W1022 13:05:25.815008 140171267708800 model_builder.py:1109] Building experimental DeepMAC meta-arch. Some features may be omitted.
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_center_net_deepmac): 1.6s
I1022 13:05:26.100098 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_center_net_deepmac): 1.6s
[ OK ] ModelBuilderTF2Test.test_create_center_net_deepmac
[ RUN ] ModelBuilderTF2Test.test_create_center_net_model0 (customize_head_params=True)
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_center_net_model0 (customize_head_params=True)): 0.54s
I1022 13:05:26.637552 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_center_net_model0 (customize_head_params=True)): 0.54s
[ OK ] ModelBuilderTF2Test.test_create_center_net_model0 (customize_head_params=True)
[ RUN ] ModelBuilderTF2Test.test_create_center_net_model1 (customize_head_params=False)
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_center_net_model1 (customize_head_params=False)): 0.26s
I1022 13:05:26.894291 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_center_net_model1 (customize_head_params=False)): 0.26s
[ OK ] ModelBuilderTF2Test.test_create_center_net_model1 (customize_head_params=False)
[ RUN ] ModelBuilderTF2Test.test_create_center_net_model_from_keypoints
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_center_net_model_from_keypoints): 0.35s
I1022 13:05:27.247210 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_center_net_model_from_keypoints): 0.35s
[ OK ] ModelBuilderTF2Test.test_create_center_net_model_from_keypoints
[ RUN ] ModelBuilderTF2Test.test_create_center_net_model_mobilenet
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_center_net_model_mobilenet): 1.93s
I1022 13:05:29.179542 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_center_net_model_mobilenet): 1.93s
[ OK ] ModelBuilderTF2Test.test_create_center_net_model_mobilenet
[ RUN ] ModelBuilderTF2Test.test_create_experimental_model
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_experimental_model): 0.0s
I1022 13:05:29.185572 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_experimental_model): 0.0s
[ OK ] ModelBuilderTF2Test.test_create_experimental_model
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature0 (True)
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature0 (True)): 0.02s
I1022 13:05:29.210199 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature0 (True)): 0.02s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature0 (True)
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature1 (False)
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature1 (False)): 0.01s
I1022 13:05:29.225592 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature1 (False)): 0.01s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_from_config_with_crop_feature1 (False)
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner): 0.02s
I1022 13:05:29.241262 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner): 0.02s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_model_from_config_with_example_miner
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul): 0.09s
I1022 13:05:29.333478 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul): 0.09s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul): 0.1s
I1022 13:05:29.437408 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul): 0.1s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul): 0.1s
I1022 13:05:29.534689 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul): 0.1s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul
[ RUN ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul): 0.1s
I1022 13:05:29.631435 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul): 0.1s
[ OK ] ModelBuilderTF2Test.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul
[ RUN ] ModelBuilderTF2Test.test_create_rfcn_model_from_config
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_rfcn_model_from_config): 0.21s
I1022 13:05:29.842083 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_rfcn_model_from_config): 0.21s
[ OK ] ModelBuilderTF2Test.test_create_rfcn_model_from_config
[
RUN ] ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config): 0.05s I1022 13:05:29.888386 140171267708800 test_util.py:2461] time(__main__.ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config): 0.05s [ OK ] ModelBuilderTF2Test.test_create_ssd_fpn_model_from_config [ RUN ] ModelBuilderTF2Test.test_create_ssd_models_from_config I1022 13:05:30.186089 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b0 I1022 13:05:30.186297 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 64 I1022 13:05:30.186408 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 3 I1022 13:05:30.189784 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:30.231360 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:30.231522 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:30.349045 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:30.349238 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:30.672279 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:30.672492 140171267708800 efficientnet_model.py:143] round_filter input=40 output=40 I1022 13:05:31.315770 140171267708800 efficientnet_model.py:143] round_filter input=40 output=40 I1022 13:05:31.321671 140171267708800 efficientnet_model.py:143] round_filter input=80 output=80 I1022 13:05:32.263452 140171267708800 efficientnet_model.py:143] round_filter input=80 output=80 I1022 13:05:32.263679 140171267708800 efficientnet_model.py:143] round_filter input=112 output=112 I1022 13:05:33.531908 140171267708800 efficientnet_model.py:143] round_filter input=112 output=112 I1022 
13:05:33.532914 140171267708800 efficientnet_model.py:143] round_filter input=192 output=192 I1022 13:05:34.750185 140171267708800 efficientnet_model.py:143] round_filter input=192 output=192 I1022 13:05:34.750547 140171267708800 efficientnet_model.py:143] round_filter input=320 output=320 I1022 13:05:35.079621 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=1280 I1022 13:05:35.190311 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.0, resolution=224, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, 
weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32') I1022 13:05:35.376646 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b1 I1022 13:05:35.377914 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 88 I1022 13:05:35.378026 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 4 I1022 13:05:35.386815 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:35.437492 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:35.437643 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:36.014590 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:36.022101 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:37.257986 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:37.261803 140171267708800 efficientnet_model.py:143] round_filter input=40 output=40 I1022 13:05:38.184516 140171267708800 efficientnet_model.py:143] round_filter input=40 output=40 I1022 13:05:38.184739 140171267708800 efficientnet_model.py:143] round_filter input=80 output=80 I1022 13:05:39.426510 140171267708800 efficientnet_model.py:143] round_filter input=80 output=80 I1022 13:05:39.428242 140171267708800 efficientnet_model.py:143] round_filter input=112 output=112 I1022 13:05:40.620887 140171267708800 efficientnet_model.py:143] round_filter input=112 output=112 I1022 13:05:40.621126 140171267708800 efficientnet_model.py:143] round_filter input=192 output=192 I1022 13:05:42.226993 140171267708800 efficientnet_model.py:143] round_filter input=192 output=192 I1022 13:05:42.227209 
140171267708800 efficientnet_model.py:143] round_filter input=320 output=320 I1022 13:05:42.791928 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=1280 I1022 13:05:42.843137 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.1, resolution=240, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', 
dtype='float32') I1022 13:05:42.987412 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b2 I1022 13:05:42.997797 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 112 I1022 13:05:42.997961 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 5 I1022 13:05:43.000517 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:43.030355 140171267708800 efficientnet_model.py:143] round_filter input=32 output=32 I1022 13:05:43.030548 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:43.293219 140171267708800 efficientnet_model.py:143] round_filter input=16 output=16 I1022 13:05:43.293429 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:43.915967 140171267708800 efficientnet_model.py:143] round_filter input=24 output=24 I1022 13:05:43.916163 140171267708800 efficientnet_model.py:143] round_filter input=40 output=48 I1022 13:05:44.650078 140171267708800 efficientnet_model.py:143] round_filter input=40 output=48 I1022 13:05:44.650319 140171267708800 efficientnet_model.py:143] round_filter input=80 output=88 I1022 13:05:45.353831 140171267708800 efficientnet_model.py:143] round_filter input=80 output=88 I1022 13:05:45.354048 140171267708800 efficientnet_model.py:143] round_filter input=112 output=120 I1022 13:05:46.047439 140171267708800 efficientnet_model.py:143] round_filter input=112 output=120 I1022 13:05:46.047691 140171267708800 efficientnet_model.py:143] round_filter input=192 output=208 I1022 13:05:46.839373 140171267708800 efficientnet_model.py:143] round_filter input=192 output=208 I1022 13:05:46.839587 140171267708800 efficientnet_model.py:143] round_filter input=320 output=352 I1022 13:05:47.182832 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=1408 I1022 13:05:47.275254 
140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.1, depth_coefficient=1.2, resolution=260, dropout_rate=0.3, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32') I1022 13:05:47.500313 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b3 I1022 13:05:47.500530 140171267708800 
ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 160 I1022 13:05:47.500616 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 6 I1022 13:05:47.503392 140171267708800 efficientnet_model.py:143] round_filter input=32 output=40 I1022 13:05:47.552235 140171267708800 efficientnet_model.py:143] round_filter input=32 output=40 I1022 13:05:47.552410 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:47.830249 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:47.830465 140171267708800 efficientnet_model.py:143] round_filter input=24 output=32 I1022 13:05:48.376808 140171267708800 efficientnet_model.py:143] round_filter input=24 output=32 I1022 13:05:48.377026 140171267708800 efficientnet_model.py:143] round_filter input=40 output=48 I1022 13:05:49.161965 140171267708800 efficientnet_model.py:143] round_filter input=40 output=48 I1022 13:05:49.162206 140171267708800 efficientnet_model.py:143] round_filter input=80 output=96 I1022 13:05:50.139662 140171267708800 efficientnet_model.py:143] round_filter input=80 output=96 I1022 13:05:50.139899 140171267708800 efficientnet_model.py:143] round_filter input=112 output=136 I1022 13:05:51.328795 140171267708800 efficientnet_model.py:143] round_filter input=112 output=136 I1022 13:05:51.334763 140171267708800 efficientnet_model.py:143] round_filter input=192 output=232 I1022 13:05:52.486835 140171267708800 efficientnet_model.py:143] round_filter input=192 output=232 I1022 13:05:52.487081 140171267708800 efficientnet_model.py:143] round_filter input=320 output=384 I1022 13:05:53.029103 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=1536 I1022 13:05:53.088741 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.2, depth_coefficient=1.4, resolution=300, dropout_rate=0.3, 
blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32') I1022 13:05:53.207647 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b4 I1022 13:05:53.207877 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 224 I1022 13:05:53.207977 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet 
BiFPN num iterations: 7 I1022 13:05:53.210431 140171267708800 efficientnet_model.py:143] round_filter input=32 output=48 I1022 13:05:53.239423 140171267708800 efficientnet_model.py:143] round_filter input=32 output=48 I1022 13:05:53.239601 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:53.570562 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:53.570759 140171267708800 efficientnet_model.py:143] round_filter input=24 output=32 I1022 13:05:53.917072 140171267708800 efficientnet_model.py:143] round_filter input=24 output=32 I1022 13:05:53.917248 140171267708800 efficientnet_model.py:143] round_filter input=40 output=56 I1022 13:05:54.281583 140171267708800 efficientnet_model.py:143] round_filter input=40 output=56 I1022 13:05:54.281781 140171267708800 efficientnet_model.py:143] round_filter input=80 output=112 I1022 13:05:54.817797 140171267708800 efficientnet_model.py:143] round_filter input=80 output=112 I1022 13:05:54.817963 140171267708800 efficientnet_model.py:143] round_filter input=112 output=160 I1022 13:05:55.365173 140171267708800 efficientnet_model.py:143] round_filter input=112 output=160 I1022 13:05:55.365439 140171267708800 efficientnet_model.py:143] round_filter input=192 output=272 I1022 13:05:56.087500 140171267708800 efficientnet_model.py:143] round_filter input=192 output=272 I1022 13:05:56.087690 140171267708800 efficientnet_model.py:143] round_filter input=320 output=448 I1022 13:05:56.276348 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=1792 I1022 13:05:56.317178 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.4, depth_coefficient=1.8, resolution=380, dropout_rate=0.4, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), 
BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32') I1022 13:05:56.389096 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b5 I1022 13:05:56.389258 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 288 I1022 13:05:56.389336 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 7 I1022 13:05:56.390896 140171267708800 efficientnet_model.py:143] round_filter input=32 output=48 I1022 13:05:56.409529 140171267708800 efficientnet_model.py:143] 
round_filter input=32 output=48 I1022 13:05:56.409648 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:56.634036 140171267708800 efficientnet_model.py:143] round_filter input=16 output=24 I1022 13:05:56.634210 140171267708800 efficientnet_model.py:143] round_filter input=24 output=40 I1022 13:05:57.070012 140171267708800 efficientnet_model.py:143] round_filter input=24 output=40 I1022 13:05:57.070188 140171267708800 efficientnet_model.py:143] round_filter input=40 output=64 I1022 13:05:57.542007 140171267708800 efficientnet_model.py:143] round_filter input=40 output=64 I1022 13:05:57.542183 140171267708800 efficientnet_model.py:143] round_filter input=80 output=128 I1022 13:05:58.411527 140171267708800 efficientnet_model.py:143] round_filter input=80 output=128 I1022 13:05:58.411765 140171267708800 efficientnet_model.py:143] round_filter input=112 output=176 I1022 13:05:59.061208 140171267708800 efficientnet_model.py:143] round_filter input=112 output=176 I1022 13:05:59.061396 140171267708800 efficientnet_model.py:143] round_filter input=192 output=304 I1022 13:05:59.894794 140171267708800 efficientnet_model.py:143] round_filter input=192 output=304 I1022 13:05:59.894980 140171267708800 efficientnet_model.py:143] round_filter input=320 output=512 I1022 13:06:00.170370 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=2048 I1022 13:06:00.208472 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.6, depth_coefficient=2.2, resolution=456, dropout_rate=0.4, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), 
BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32') I1022 13:06:00.293001 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:150] EfficientDet EfficientNet backbone version: efficientnet-b6 I1022 13:06:00.293174 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:151] EfficientDet BiFPN num filters: 384 I1022 13:06:00.293250 140171267708800 ssd_efficientnet_bifpn_feature_extractor.py:153] EfficientDet BiFPN num iterations: 8 I1022 13:06:00.294788 140171267708800 efficientnet_model.py:143] round_filter input=32 output=56 I1022 13:06:00.315005 140171267708800 efficientnet_model.py:143] round_filter input=32 output=56 I1022 13:06:00.315175 140171267708800 efficientnet_model.py:143] round_filter input=16 output=32 I1022 13:06:00.548431 140171267708800 
efficientnet_model.py:143] round_filter input=16 output=32 I1022 13:06:00.548611 140171267708800 efficientnet_model.py:143] round_filter input=24 output=40 I1022 13:06:01.093845 140171267708800 efficientnet_model.py:143] round_filter input=24 output=40 I1022 13:06:01.094021 140171267708800 efficientnet_model.py:143] round_filter input=40 output=72 I1022 13:06:01.652525 140171267708800 efficientnet_model.py:143] round_filter input=40 output=72 I1022 13:06:01.652728 140171267708800 efficientnet_model.py:143] round_filter input=80 output=144 I1022 13:06:02.379097 140171267708800 efficientnet_model.py:143] round_filter input=80 output=144 I1022 13:06:02.379296 140171267708800 efficientnet_model.py:143] round_filter input=112 output=200 I1022 13:06:03.102002 140171267708800 efficientnet_model.py:143] round_filter input=112 output=200 I1022 13:06:03.102188 140171267708800 efficientnet_model.py:143] round_filter input=192 output=344 I1022 13:06:04.093955 140171267708800 efficientnet_model.py:143] round_filter input=192 output=344 I1022 13:06:04.094131 140171267708800 efficientnet_model.py:143] round_filter input=320 output=576 I1022 13:06:04.371623 140171267708800 efficientnet_model.py:143] round_filter input=1280 output=2304 I1022 13:06:04.408350 140171267708800 efficientnet_model.py:453] Building model efficientnet with params ModelConfig(width_coefficient=1.8, depth_coefficient=2.6, resolution=528, dropout_rate=0.5, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), 
[...verbose EfficientNet/EfficientDet ModelConfig and round_filter logging trimmed; backbone: efficientnet-b7, BiFPN num filters: 384, BiFPN num iterations: 8...]
[ OK ] ModelBuilderTF2Test.test_create_ssd_models_from_config
[ OK ] ModelBuilderTF2Test.test_invalid_faster_rcnn_batchnorm_update
[ OK ] ModelBuilderTF2Test.test_invalid_first_stage_nms_iou_threshold
[ OK ] ModelBuilderTF2Test.test_invalid_model_config_proto
[ OK ] ModelBuilderTF2Test.test_invalid_second_stage_batch_size
[ SKIPPED ] ModelBuilderTF2Test.test_session
[ OK ] ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor
[ OK ] ModelBuilderTF2Test.test_unknown_meta_architecture
[ OK ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
----------------------------------------------------------------------
Ran 24 tests in 45.483s

OK (skipped=1)
import object_detection
if os.name == 'posix':
    !wget {PRETRAINED_MODEL_URL}
    !mv {PRETRAINED_MODEL_NAME+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME+'.tar.gz'}
if os.name == 'nt':
    wget.download(PRETRAINED_MODEL_URL)
    !move {PRETRAINED_MODEL_NAME+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME+'.tar.gz'}
if os.name == 'posix':
    !wget {PRETRAINED_MODEL_URL2}
    !mv {PRETRAINED_MODEL_NAME2+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME2+'.tar.gz'}
if os.name == 'nt':
    wget.download(PRETRAINED_MODEL_URL2)
    !move {PRETRAINED_MODEL_NAME2+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME2+'.tar.gz'}
if os.name == 'posix':
    !wget {PRETRAINED_MODEL_URL3}
    !mv {PRETRAINED_MODEL_NAME3+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME3+'.tar.gz'}
if os.name == 'nt':
    wget.download(PRETRAINED_MODEL_URL3)
    !move {PRETRAINED_MODEL_NAME3+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
    !cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME3+'.tar.gz'}
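The three blocks above differ only in the model URL and archive name. As a sketch, the download-and-extract step can be factored into a single cross-platform helper using only the standard library (so no wget/move/tar shell calls are needed on either posix or Windows); the function name `fetch_pretrained_model` is mine, not part of the notebook:

```python
import os
import tarfile
import urllib.request

def fetch_pretrained_model(url, name, dest_dir):
    """Download a pretrained-model archive (skipped if already cached)
    and extract it into dest_dir; returns the extracted model folder."""
    os.makedirs(dest_dir, exist_ok=True)
    archive = os.path.join(dest_dir, name + '.tar.gz')
    if not os.path.exists(archive):
        # urllib works the same on posix and nt, replacing wget/wget.download
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(dest_dir)  # replaces the `tar -zxvf` shell call
    return os.path.join(dest_dir, name)
```

It could then be called in a loop over the three (URL, name) pairs instead of repeating the platform checks per model.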
--2022-10-22 13:06:11-- ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8.tar.gz saved [90453990/90453990] (86M, 48.3 MB/s)
--2022-10-22 13:06:15-- ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz saved [386527459/386527459] (369M, 113 MB/s)
--2022-10-22 13:06:24-- ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz saved [20518283/20518283] (20M, 96.0 MB/s)
[each archive from download.tensorflow.org/models/object_detection/tf2/20200711/ extracted into checkpoint/, pipeline.config, and saved_model/]
Creating a labelmap file, which maps each class name to a numeric ID and is used for labeling during training and testing.
labels = [{'name':'object', 'id':1}]
with open(files['LABELMAP'], 'w') as f:
    for label in labels:
        f.write('item { \n')
        f.write('\tname:\'{}\'\n'.format(label['name']))
        f.write('\tid:{}\n'.format(label['id']))
        f.write('}\n')
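For the single 'object' class, the loop above produces a pbtxt-style labelmap. A quick sanity check, writing to a temporary path instead of files['LABELMAP'], shows the exact text the TF Object Detection API will read:

```python
import os
import tempfile

labels = [{'name': 'object', 'id': 1}]
path = os.path.join(tempfile.mkdtemp(), 'label_map.pbtxt')
with open(path, 'w') as f:
    for label in labels:
        f.write('item { \n')
        f.write('\tname:\'{}\'\n'.format(label['name']))
        f.write('\tid:{}\n'.format(label['id']))
        f.write('}\n')

# The generated file contains one item block per class:
# item {
# 	name:'object'
# 	id:1
# }
print(open(path).read())
```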
Converting the images to TFRecord (a binary format) using the CSV annotation files. # 3. Create TF records
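Before the TFRecords can be written, the per-box rows in the annotation CSVs have to be grouped into one record per image. A minimal sketch of that grouping step, assuming the common filename/width/height/class/xmin/ymin/xmax/ymax column layout (the actual columns in train_annotations.csv may differ, and the helper name is mine):

```python
import csv
from collections import defaultdict

def group_annotations(csv_path):
    """Collect all bounding boxes belonging to the same image, since the
    TFRecord writer emits one tf.train.Example per image file."""
    groups = defaultdict(list)
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            groups[row['filename']].append(
                (row['class'],
                 int(row['xmin']), int(row['ymin']),
                 int(row['xmax']), int(row['ymax'])))
    return dict(groups)
```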
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
# OPTIONAL IF RUNNING ON COLAB
!unzip "/content/drive/MyDrive/inventory_images.zip"
Archive: /content/drive/MyDrive/inventory_images.zip
  creating: inventory_images/
  creating: inventory_images/test/
  creating: inventory_images/annotations/
  creating: inventory_images/train/
  creating: inventory_images/val/
[...per-file inflating lines trimmed: test_*.jpg, train_*.jpg, and val_*.jpg images, train/val/test annotation CSVs in annotations/, plus macOS __MACOSX/.DS_Store metadata files...]
inventory_images/val/val_54.jpg inflating: __MACOSX/inventory_images/val/._val_54.jpg inflating: inventory_images/val/val_40.jpg inflating: __MACOSX/inventory_images/val/._val_40.jpg inflating: inventory_images/val/val_41.jpg inflating: __MACOSX/inventory_images/val/._val_41.jpg inflating: inventory_images/val/val_55.jpg inflating: __MACOSX/inventory_images/val/._val_55.jpg inflating: inventory_images/val/val_69.jpg inflating: __MACOSX/inventory_images/val/._val_69.jpg inflating: inventory_images/val/val_32.jpg inflating: __MACOSX/inventory_images/val/._val_32.jpg inflating: inventory_images/val/val_26.jpg inflating: __MACOSX/inventory_images/val/._val_26.jpg inflating: inventory_images/val/val_1.jpg inflating: __MACOSX/inventory_images/val/._val_1.jpg inflating: inventory_images/val/val_27.jpg inflating: __MACOSX/inventory_images/val/._val_27.jpg inflating: inventory_images/val/val_33.jpg inflating: __MACOSX/inventory_images/val/._val_33.jpg inflating: inventory_images/val/val_25.jpg inflating: __MACOSX/inventory_images/val/._val_25.jpg inflating: inventory_images/val/val_31.jpg inflating: __MACOSX/inventory_images/val/._val_31.jpg inflating: inventory_images/val/val_19.jpg inflating: __MACOSX/inventory_images/val/._val_19.jpg inflating: inventory_images/val/val_3.jpg inflating: __MACOSX/inventory_images/val/._val_3.jpg inflating: inventory_images/val/val_2.jpg inflating: __MACOSX/inventory_images/val/._val_2.jpg inflating: inventory_images/val/val_18.jpg inflating: __MACOSX/inventory_images/val/._val_18.jpg inflating: inventory_images/val/val_30.jpg inflating: __MACOSX/inventory_images/val/._val_30.jpg inflating: inventory_images/val/val_24.jpg inflating: __MACOSX/inventory_images/val/._val_24.jpg inflating: inventory_images/val/val_20.jpg inflating: __MACOSX/inventory_images/val/._val_20.jpg inflating: inventory_images/val/val_34.jpg inflating: __MACOSX/inventory_images/val/._val_34.jpg inflating: inventory_images/val/val_6.jpg inflating: 
__MACOSX/inventory_images/val/._val_6.jpg inflating: inventory_images/val/val_7.jpg inflating: __MACOSX/inventory_images/val/._val_7.jpg inflating: inventory_images/val/val_35.jpg inflating: __MACOSX/inventory_images/val/._val_35.jpg inflating: inventory_images/val/val_21.jpg inflating: __MACOSX/inventory_images/val/._val_21.jpg inflating: inventory_images/val/val_37.jpg inflating: __MACOSX/inventory_images/val/._val_37.jpg inflating: inventory_images/val/val_23.jpg inflating: __MACOSX/inventory_images/val/._val_23.jpg inflating: inventory_images/val/val_5.jpg inflating: __MACOSX/inventory_images/val/._val_5.jpg inflating: inventory_images/val/val_4.jpg inflating: __MACOSX/inventory_images/val/._val_4.jpg inflating: inventory_images/val/val_22.jpg inflating: __MACOSX/inventory_images/val/._val_22.jpg inflating: inventory_images/val/val_36.jpg inflating: __MACOSX/inventory_images/val/._val_36.jpg
if not os.path.exists(files['TF_RECORD_SCRIPT']):
!git clone https://github.com/dilshad-geol/TF {paths['SCRIPTS_PATH']} #downloading generate_tfrecord.py if it is not already present
Cloning into 'Tensorflow/scripts'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Compressing objects: 100% (2/2), done. remote: Total 3 (delta 0), reused 3 (delta 0), pack-reused 0 Unpacking objects: 100% (3/3), done.
!python /content/Tensorflow/scripts/generate_tfrecord.py --csv_input=/content/inventory_images/annotations/train_annotations.csv --output_path=/content/Tensorflow/workspace/annotations/train.record --image_dir=/content/inventory_images/train
2022-10-22 13:07:07.898580: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2022-10-22 13:07:08.636133: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2022-10-22 13:07:08.636262: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2022-10-22 13:07:08.636305: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. Successfully created the TFRecords: /content/Tensorflow/workspace/annotations/train.record
!python /content/Tensorflow/scripts/generate_tfrecord.py --csv_input=/content/inventory_images/annotations/test_annotations.csv --output_path=/content/Tensorflow/workspace/annotations/test.record --image_dir=/content/inventory_images/test
2022-10-22 13:07:15.947578: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2022-10-22 13:07:16.658679: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2022-10-22 13:07:16.658797: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2022-10-22 13:07:16.658819: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. Successfully created the TFRecords: /content/Tensorflow/workspace/annotations/test.record
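Before training, it is worth confirming that the generated `.record` files actually contain one record per annotated image. The sketch below counts records by walking the TFRecord framing directly (each record is a little-endian uint64 length, a 4-byte length CRC, the payload, and a 4-byte data CRC), so it needs no TensorFlow import; the `/content/...` path in the usage comment is the one used in the cells above.

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking its framing:
    [uint64 length][uint32 length-CRC][payload][uint32 data-CRC].
    CRCs are skipped, not validated -- this is a quick sanity check."""
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)               # little-endian uint64 payload length
            if len(header) < 8:
                break                        # clean EOF (or truncated file)
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)        # skip length-CRC, payload, data-CRC
            count += 1
    return count

# e.g. count_tfrecords('/content/Tensorflow/workspace/annotations/train.record')
# should match the number of rows (grouped by filename) in train_annotations.csv
```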
!pip install pytz
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (2022.4)
if os.name =='posix':
!cp {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH'])}
if os.name == 'nt':
!copy {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH'])}
if os.name =='posix':
!cp {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME2, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH2'])}
if os.name == 'nt':
!copy {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME2, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH2'])}
if os.name =='posix':
!cp {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME3, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH3'])}
if os.name == 'nt':
!copy {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME3, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH3'])}
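The six cells above branch on `os.name` because `cp` and `copy` are shell-specific. A minimal cross-platform alternative is `shutil.copy`, which removes the branching and lets one loop cover all three models. The helper below is a sketch; the `paths` dict and `PRETRAINED_MODEL_NAME*` variables in the commented call are assumed from earlier cells of this notebook.

```python
import os
import shutil

def copy_pipeline_configs(pretrained_root, model_names, checkpoint_dirs):
    """Copy each pretrained model's pipeline.config into its training folder.
    shutil.copy works on both POSIX and Windows, replacing the !cp / !copy cells."""
    for name, dst in zip(model_names, checkpoint_dirs):
        src = os.path.join(pretrained_root, name, 'pipeline.config')
        shutil.copy(src, dst)

# In the notebook this would be called roughly as:
# copy_pipeline_configs(
#     paths['PRETRAINED_MODEL_PATH'],
#     [PRETRAINED_MODEL_NAME, PRETRAINED_MODEL_NAME2, PRETRAINED_MODEL_NAME3],
#     [paths['CHECKPOINT_PATH'], paths['CHECKPOINT_PATH2'], paths['CHECKPOINT_PATH3']])
```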
Here we update each pipeline config to match our requirements: set the number of classes and the batch size, point the training and testing TFRecord paths and the label map at our files, and set the fine-tune checkpoint for training.
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.protos import pipeline_pb2
from google.protobuf import text_format
config = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG'])
config2 = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG2'])
config3 = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG3'])
print(config)
print(config2)
print(config3)
{'model': ssd {
num_classes: 90
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_mobilenet_v1_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.9999998989515007e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.9999998989515007e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
depth: 256
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.599999904632568
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 9.99999993922529e-09
iou_threshold: 0.6000000238418579
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
, 'train_config': batch_size: 64
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.03999999910593033
total_steps: 25000
warmup_learning_rate: 0.013333000242710114
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.8999999761581421
}
use_moving_average: false
}
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
num_steps: 25000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "classification"
fine_tune_checkpoint_version: V2
, 'train_input_config': tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
, 'eval_config': metrics_set: "coco_detection_metrics"
use_moving_averages: false
batch_size: 1
, 'eval_input_configs': [label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
], 'eval_input_config': label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
}
{'model': ssd {
num_classes: 90
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_resnet101_v1_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.00039999998989515007
}
}
initializer {
truncated_normal_initializer {
mean: 0.0
stddev: 0.029999999329447746
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.00039999998989515007
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
depth: 256
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.599999904632568
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 9.99999993922529e-09
iou_threshold: 0.6000000238418579
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
, 'train_config': batch_size: 64
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.03999999910593033
total_steps: 25000
warmup_learning_rate: 0.013333000242710114
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.8999999761581421
}
use_moving_average: false
}
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
num_steps: 25000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "classification"
use_bfloat16: true
fine_tune_checkpoint_version: V2
, 'train_input_config': label_map_path: "PATH_TO_BE_CONFIGURED"
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
, 'eval_config': metrics_set: "coco_detection_metrics"
use_moving_averages: false
, 'eval_input_configs': [label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
], 'eval_input_config': label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
}
{'model': ssd {
num_classes: 90
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_mobilenet_v2_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.9999998989515007e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
use_depthwise: true
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
additional_layer_depth: 128
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.9999998989515007e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.009999999776482582
}
}
activation: RELU_6
batch_norm {
decay: 0.996999979019165
scale: true
epsilon: 0.0010000000474974513
}
}
depth: 128
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.599999904632568
share_prediction_tower: true
use_depthwise: true
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 9.99999993922529e-09
iou_threshold: 0.6000000238418579
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
, 'train_config': batch_size: 128
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.07999999821186066
total_steps: 50000
warmup_learning_rate: 0.026666000485420227
warmup_steps: 1000
}
}
momentum_optimizer_value: 0.8999999761581421
}
use_moving_average: false
}
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
num_steps: 50000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "classification"
fine_tune_checkpoint_version: V2
, 'train_input_config': label_map_path: "PATH_TO_BE_CONFIGURED"
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
, 'eval_config': metrics_set: "coco_detection_metrics"
use_moving_averages: false
, 'eval_input_configs': [label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
], 'eval_input_config': label_map_path: "PATH_TO_BE_CONFIGURED"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "PATH_TO_BE_CONFIGURED"
}
}
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(files['PIPELINE_CONFIG'], "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline_config)
pipeline_config2 = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(files['PIPELINE_CONFIG2'], "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline_config2)
pipeline_config3 = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(files['PIPELINE_CONFIG3'], "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline_config3)
pipeline_config.model.ssd.num_classes = 1
pipeline_config.train_config.batch_size = 4
pipeline_config.train_config.fine_tune_checkpoint = os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'checkpoint', 'ckpt-0')
pipeline_config.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config.train_input_reader.label_map_path= files['LABELMAP']
pipeline_config.train_input_reader.tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'train.record')]
pipeline_config.eval_input_reader[0].label_map_path = files['LABELMAP']
pipeline_config.eval_input_reader[0].tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'test.record')]
pipeline_config2.model.ssd.num_classes = 1
pipeline_config2.train_config.batch_size = 4
pipeline_config2.train_config.fine_tune_checkpoint = os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME2, 'checkpoint', 'ckpt-0')
pipeline_config2.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config2.train_input_reader.label_map_path= files['LABELMAP']
pipeline_config2.train_input_reader.tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'train.record')]
pipeline_config2.eval_input_reader[0].label_map_path = files['LABELMAP']
pipeline_config2.eval_input_reader[0].tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'test.record')]
pipeline_config3.model.ssd.num_classes = 1
pipeline_config3.train_config.batch_size = 4
pipeline_config3.train_config.fine_tune_checkpoint = os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME3, 'checkpoint', 'ckpt-0')
pipeline_config3.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config3.train_input_reader.label_map_path= files['LABELMAP']
pipeline_config3.train_input_reader.tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'train.record')]
pipeline_config3.eval_input_reader[0].label_map_path = files['LABELMAP']
pipeline_config3.eval_input_reader[0].tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'test.record')]
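The same eight overrides are applied to all three configs above, differing only in the pretrained-model directory. A small helper can express that once; the sketch below assumes the `pipeline_config*` protos loaded earlier, and `configure_pipeline` is a hypothetical name, not part of the Object Detection API.

```python
import os

def configure_pipeline(cfg, pretrained_dir, label_map, annotation_dir):
    """Apply our shared overrides to one pipeline proto: a single class,
    a small batch size for Colab memory, detection-type fine-tuning from
    the downloaded checkpoint, and our TFRecord / label-map paths."""
    cfg.model.ssd.num_classes = 1
    cfg.train_config.batch_size = 4
    cfg.train_config.fine_tune_checkpoint = os.path.join(pretrained_dir, 'checkpoint', 'ckpt-0')
    cfg.train_config.fine_tune_checkpoint_type = "detection"
    cfg.train_input_reader.label_map_path = label_map
    cfg.train_input_reader.tf_record_input_reader.input_path[:] = [os.path.join(annotation_dir, 'train.record')]
    cfg.eval_input_reader[0].label_map_path = label_map
    cfg.eval_input_reader[0].tf_record_input_reader.input_path[:] = [os.path.join(annotation_dir, 'test.record')]
    return cfg

# In the notebook this would replace the three blocks above, roughly:
# for cfg, name in [(pipeline_config, PRETRAINED_MODEL_NAME),
#                   (pipeline_config2, PRETRAINED_MODEL_NAME2),
#                   (pipeline_config3, PRETRAINED_MODEL_NAME3)]:
#     configure_pipeline(cfg, os.path.join(paths['PRETRAINED_MODEL_PATH'], name),
#                        files['LABELMAP'], paths['ANNOTATION_PATH'])
```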
config_text = text_format.MessageToString(pipeline_config)
with tf.io.gfile.GFile(files['PIPELINE_CONFIG'], "wb") as f:
f.write(config_text)
config_text2 = text_format.MessageToString(pipeline_config2)
with tf.io.gfile.GFile(files['PIPELINE_CONFIG2'], "wb") as f:
f.write(config_text2)
config_text3 = text_format.MessageToString(pipeline_config3)
with tf.io.gfile.GFile(files['PIPELINE_CONFIG3'], "wb") as f:
f.write(config_text3)
print(config_text)
print(config_text2)
print(config_text3)
model {
ssd {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_mobilenet_v1_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
depth: 256
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.6
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 1e-08
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
}
train_config {
batch_size: 4
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.04
total_steps: 25000
warmup_learning_rate: 0.013333
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint: "Tensorflow/workspace/pre-trained-models/ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
num_steps: 25000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
fine_tune_checkpoint_version: V2
}
train_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/train.record"
}
}
eval_config {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
batch_size: 1
}
eval_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/test.record"
}
}
model {
ssd {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_resnet101_v1_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.0004
}
}
initializer {
truncated_normal_initializer {
mean: 0.0
stddev: 0.03
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 0.0004
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
depth: 256
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.6
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 1e-08
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
}
train_config {
batch_size: 4
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.04
total_steps: 25000
warmup_learning_rate: 0.013333
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint: "Tensorflow/workspace/pre-trained-models/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
num_steps: 25000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
use_bfloat16: true
fine_tune_checkpoint_version: V2
}
train_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/train.record"
}
}
eval_config {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}
eval_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/test.record"
}
}
model {
ssd {
num_classes: 1
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
feature_extractor {
type: "ssd_mobilenet_v2_fpn_keras"
depth_multiplier: 1.0
min_depth: 16
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
use_depthwise: true
override_base_feature_extractor_hyperparams: true
fpn {
min_level: 3
max_level: 7
additional_layer_depth: 128
}
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 4e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.01
}
}
activation: RELU_6
batch_norm {
decay: 0.997
scale: true
epsilon: 0.001
}
}
depth: 128
num_layers_before_predictor: 4
kernel_size: 3
class_prediction_bias_init: -4.6
share_prediction_tower: true
use_depthwise: true
}
}
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
scales_per_octave: 2
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 1e-08
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
use_static_shapes: false
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.25
}
}
classification_weight: 1.0
localization_weight: 1.0
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
}
train_config {
batch_size: 4
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.08
total_steps: 50000
warmup_learning_rate: 0.026666
warmup_steps: 1000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
fine_tune_checkpoint: "Tensorflow/workspace/pre-trained-models/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/checkpoint/ckpt-0"
num_steps: 50000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
fine_tune_checkpoint_version: V2
}
train_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/train.record"
}
}
eval_config {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}
eval_input_reader {
label_map_path: "Tensorflow/workspace/annotations/label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "Tensorflow/workspace/annotations/test.record"
}
}
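Both pipelines classify anchors with a weighted sigmoid focal loss (gamma: 2.0, alpha: 0.25). The idea behind focal loss is that the overwhelming majority of anchors are easy background, and the (1 - p_t)^gamma factor keeps those easy negatives from swamping the few anchors that actually cover an empty-shelf region. A minimal plain-Python sketch of the per-anchor formula (not the API's batched TensorFlow implementation):

```python
import math

def sigmoid_focal_loss(logit, label, gamma=2.0, alpha=0.25):
    """Per-anchor sigmoid focal loss; `logit` is the raw class score, `label` is 0 or 1."""
    p = 1.0 / (1.0 + math.exp(-logit))   # sigmoid probability of foreground
    p_t = p if label == 1 else 1.0 - p   # probability assigned to the true class
    alpha_t = alpha if label == 1 else 1.0 - alpha
    # (1 - p_t)^gamma down-weights anchors the model already classifies well.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

print(sigmoid_focal_loss(-4.6, 0))  # confident, correct background: near zero
print(sigmoid_focal_loss(-4.6, 1))  # missed object: large penalty
```

This is also why the box predictor sets class_prediction_bias_init: -4.6: sigmoid(-4.6) is roughly 0.01, so at initialization every anchor predicts about 1% foreground probability and the initial background loss stays small.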
Before we begin training, note that the TensorFlow/models/research/object_detection/model_main_tf2.py script is what actually drives training in the Object Detection API. Below we build one invocation of this script for each of the three models.
TRAINING_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'model_main_tf2.py')
command = "python {} --model_dir={} --pipeline_config_path={} --num_train_steps=5000".format(TRAINING_SCRIPT, paths['CHECKPOINT_PATH'], files['PIPELINE_CONFIG']) #ssd_mobilenet_v1
command2 = "python {} --model_dir={} --pipeline_config_path={} --num_train_steps=5000".format(TRAINING_SCRIPT, paths['CHECKPOINT_PATH2'], files['PIPELINE_CONFIG2']) #ssd_resnet101_v1
command3 = "python {} --model_dir={} --pipeline_config_path={} --num_train_steps=5000".format(TRAINING_SCRIPT, paths['CHECKPOINT_PATH3'], files['PIPELINE_CONFIG3']) #ssd_mobilenet_v2_fpnlite
print(command)
print(command2)
print(command3)
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_mobilenet_v1 --pipeline_config_path=Tensorflow/workspace/models/ssd_mobilenet_v1/pipeline.config --num_train_steps=5000
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_resnet101_v1 --pipeline_config_path=Tensorflow/workspace/models/ssd_resnet101_v1/pipeline.config --num_train_steps=5000
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite --pipeline_config_path=Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/pipeline.config --num_train_steps=5000
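These printed commands can be pasted into a terminal, but it is safer to build the argument list programmatically and hand it to subprocess, which avoids shell-quoting problems if a path ever contains spaces. A small sketch (build_training_command is a hypothetical helper for illustration, not part of the Object Detection API):

```python
def build_training_command(training_script, model_dir, pipeline_config, num_steps=5000):
    """Build the model_main_tf2.py invocation as an argument list (hypothetical helper)."""
    return [
        "python", training_script,
        "--model_dir={}".format(model_dir),
        "--pipeline_config_path={}".format(pipeline_config),
        "--num_train_steps={}".format(num_steps),
    ]

cmd = build_training_command(
    "Tensorflow/models/research/object_detection/model_main_tf2.py",
    "Tensorflow/workspace/models/ssd_mobilenet_v1",
    "Tensorflow/workspace/models/ssd_mobilenet_v1/pipeline.config",
)
print(" ".join(cmd))
# To actually launch training: subprocess.run(cmd, check=True),
# which raises if the training process exits with an error.
```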
!apt install --allow-change-held-packages libcudnn8=8.1.0.77-1+cuda11.2 #downgrading libcudnn8 so GPU training works with this TensorFlow/CUDA 11.2 build
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libnvidia-common-460
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  libcudnn8-dev
The following held packages will be changed:
  libcudnn8
The following packages will be DOWNGRADED:
  libcudnn8
0 upgraded, 0 newly installed, 1 downgraded, 1 to remove and 25 not upgraded.
Need to get 430 MB of archives.
After this operation, 1,392 MB disk space will be freed.
Get:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 libcudnn8 8.1.0.77-1+cuda11.2 [430 MB]
Fetched 430 MB in 7s (61.9 MB/s)
(Reading database ... 123942 files and directories currently installed.)
Removing libcudnn8-dev (8.1.1.33-1+cuda11.2) ...
update-alternatives: removing manually selected alternative - switching libcudnn to auto mode
dpkg: warning: downgrading libcudnn8 from 8.1.1.33-1+cuda11.2 to 8.1.0.77-1+cuda11.2
(Reading database ... 123919 files and directories currently installed.)
Preparing to unpack .../libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb ...
Unpacking libcudnn8 (8.1.0.77-1+cuda11.2) over (8.1.1.33-1+cuda11.2) ...
Setting up libcudnn8 (8.1.0.77-1+cuda11.2) ...
!{command} #training model ssd_mobilenet_v1
2022-10-22 06:24:32.629870: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 06:24:33.433338: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 06:24:33.433508: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 06:24:33.433529: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-10-22 06:24:37.193464: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I1022 06:24:37.342535 139983244863360 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 5000
I1022 06:24:37.348289 139983244863360 config_util.py:552] Maybe overwriting train_steps: 5000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I1022 06:24:37.348471 139983244863360 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py:564: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W1022 06:24:37.383493 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py:564: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/train.record']
I1022 06:24:37.403078 139983244863360 dataset_builder.py:162] Reading unweighted datasets: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/train.record']
I1022 06:24:37.407122 139983244863360 dataset_builder.py:79] Reading record datasets for input file: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
I1022 06:24:37.407246 139983244863360 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W1022 06:24:37.407319 139983244863360 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W1022 06:24:37.433511 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W1022 06:24:37.466846 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W1022 06:24:44.242761 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W1022 06:24:47.047674 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W1022 06:24:48.603366 139983244863360 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
2022-10-22 06:24:53.478460: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 23970816 exceeds 10% of free system memory.
2022-10-22 06:24:53.478664: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 29153280 exceeds 10% of free system memory.
2022-10-22 06:24:53.537490: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 23970816 exceeds 10% of free system memory.
2022-10-22 06:24:53.564822: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 23970816 exceeds 10% of free system memory.
2022-10-22 06:24:53.618798: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 38340864 exceeds 10% of free system memory.
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
"`tf.keras.backend.set_learning_phase` is deprecated and "
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.124801 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.127695 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.130281 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.131281 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.134683 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.135781 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.138321 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.139286 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.142559 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 06:25:24.143599 139983244863360 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
W1022 06:25:25.162433 139978797467392 deprecation.py:560] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 100 per-step time 0.810s
I1022 06:26:45.892790 139983244863360 model_lib_v2.py:707] Step 100 per-step time 0.810s
INFO:tensorflow:{'Loss/classification_loss': 0.30057523,
'Loss/localization_loss': 0.2725774,
'Loss/regularization_loss': 0.7740204,
'Loss/total_loss': 1.347173,
'learning_rate': 0.014666351}
I1022 06:26:45.893249 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.30057523,
'Loss/localization_loss': 0.2725774,
'Loss/regularization_loss': 0.7740204,
'Loss/total_loss': 1.347173,
'learning_rate': 0.014666351}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 200 per-step time 0.471s
I1022 06:27:33.011031 139983244863360 model_lib_v2.py:707] Step 200 per-step time 0.471s
INFO:tensorflow:{'Loss/classification_loss': 0.2646864,
'Loss/localization_loss': 0.26638535,
'Loss/regularization_loss': 0.7731561,
'Loss/total_loss': 1.3042278,
'learning_rate': 0.0159997}
I1022 06:27:33.011467 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.2646864,
'Loss/localization_loss': 0.26638535,
'Loss/regularization_loss': 0.7731561,
'Loss/total_loss': 1.3042278,
'learning_rate': 0.0159997}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 300 per-step time 0.448s
I1022 06:28:17.849338 139983244863360 model_lib_v2.py:707] Step 300 per-step time 0.448s
INFO:tensorflow:{'Loss/classification_loss': 0.268364,
'Loss/localization_loss': 0.25088456,
'Loss/regularization_loss': 0.77219737,
'Loss/total_loss': 1.291446,
'learning_rate': 0.01733305}
I1022 06:28:17.849758 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.268364,
'Loss/localization_loss': 0.25088456,
'Loss/regularization_loss': 0.77219737,
'Loss/total_loss': 1.291446,
'learning_rate': 0.01733305}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 400 per-step time 0.514s
I1022 06:29:09.283489 139983244863360 model_lib_v2.py:707] Step 400 per-step time 0.514s
INFO:tensorflow:{'Loss/classification_loss': 0.25289258,
'Loss/localization_loss': 0.15439151,
'Loss/regularization_loss': 0.7711736,
'Loss/total_loss': 1.1784577,
'learning_rate': 0.0186664}
I1022 06:29:09.284005 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.25289258,
'Loss/localization_loss': 0.15439151,
'Loss/regularization_loss': 0.7711736,
'Loss/total_loss': 1.1784577,
'learning_rate': 0.0186664}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 500 per-step time 0.494s
I1022 06:29:58.690702 139983244863360 model_lib_v2.py:707] Step 500 per-step time 0.494s
INFO:tensorflow:{'Loss/classification_loss': 0.21512061,
'Loss/localization_loss': 0.15732116,
'Loss/regularization_loss': 0.7700761,
'Loss/total_loss': 1.1425178,
'learning_rate': 0.01999975}
I1022 06:29:58.691074 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.21512061,
'Loss/localization_loss': 0.15732116,
'Loss/regularization_loss': 0.7700761,
'Loss/total_loss': 1.1425178,
'learning_rate': 0.01999975}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 600 per-step time 0.458s
I1022 06:30:44.536005 139983244863360 model_lib_v2.py:707] Step 600 per-step time 0.458s
INFO:tensorflow:{'Loss/classification_loss': 0.24914977,
'Loss/localization_loss': 0.18645193,
'Loss/regularization_loss': 0.76889944,
'Loss/total_loss': 1.2045012,
'learning_rate': 0.0213331}
I1022 06:30:44.536397 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.24914977,
'Loss/localization_loss': 0.18645193,
'Loss/regularization_loss': 0.76889944,
'Loss/total_loss': 1.2045012,
'learning_rate': 0.0213331}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 700 per-step time 0.469s
I1022 06:31:31.454004 139983244863360 model_lib_v2.py:707] Step 700 per-step time 0.469s
INFO:tensorflow:{'Loss/classification_loss': 0.22543749,
'Loss/localization_loss': 0.13388939,
'Loss/regularization_loss': 0.76766175,
'Loss/total_loss': 1.1269886,
'learning_rate': 0.02266645}
I1022 06:31:31.454389 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.22543749,
'Loss/localization_loss': 0.13388939,
'Loss/regularization_loss': 0.76766175,
'Loss/total_loss': 1.1269886,
'learning_rate': 0.02266645}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 800 per-step time 0.448s
I1022 06:32:16.264830 139983244863360 model_lib_v2.py:707] Step 800 per-step time 0.448s
INFO:tensorflow:{'Loss/classification_loss': 0.19838576,
'Loss/localization_loss': 0.1537438,
'Loss/regularization_loss': 0.7663388,
'Loss/total_loss': 1.1184684,
'learning_rate': 0.023999799}
I1022 06:32:16.265249 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.19838576,
'Loss/localization_loss': 0.1537438,
'Loss/regularization_loss': 0.7663388,
'Loss/total_loss': 1.1184684,
'learning_rate': 0.023999799}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 900 per-step time 0.426s
I1022 06:32:58.852374 139983244863360 model_lib_v2.py:707] Step 900 per-step time 0.426s
INFO:tensorflow:{'Loss/classification_loss': 0.21417464,
'Loss/localization_loss': 0.13476981,
'Loss/regularization_loss': 0.7649609,
'Loss/total_loss': 1.1139053,
'learning_rate': 0.025333151}
I1022 06:32:58.852743 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.21417464,
'Loss/localization_loss': 0.13476981,
'Loss/regularization_loss': 0.7649609,
'Loss/total_loss': 1.1139053,
'learning_rate': 0.025333151}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1000 per-step time 0.475s
I1022 06:33:46.348976 139983244863360 model_lib_v2.py:707] Step 1000 per-step time 0.475s
INFO:tensorflow:{'Loss/classification_loss': 0.20125318,
'Loss/localization_loss': 0.1586676,
'Loss/regularization_loss': 0.7634925,
'Loss/total_loss': 1.1234133,
'learning_rate': 0.0266665}
I1022 06:33:46.349357 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.20125318,
'Loss/localization_loss': 0.1586676,
'Loss/regularization_loss': 0.7634925,
'Loss/total_loss': 1.1234133,
'learning_rate': 0.0266665}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1100 per-step time 0.434s
I1022 06:34:29.808237 139983244863360 model_lib_v2.py:707] Step 1100 per-step time 0.434s
INFO:tensorflow:{'Loss/classification_loss': 0.16276582,
'Loss/localization_loss': 0.14533728,
'Loss/regularization_loss': 0.76196104,
'Loss/total_loss': 1.0700641,
'learning_rate': 0.02799985}
I1022 06:34:29.808641 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.16276582,
'Loss/localization_loss': 0.14533728,
'Loss/regularization_loss': 0.76196104,
'Loss/total_loss': 1.0700641,
'learning_rate': 0.02799985}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1200 per-step time 0.430s
I1022 06:35:12.795194 139983244863360 model_lib_v2.py:707] Step 1200 per-step time 0.430s
INFO:tensorflow:{'Loss/classification_loss': 0.15176283,
'Loss/localization_loss': 0.09070753,
'Loss/regularization_loss': 0.7603698,
'Loss/total_loss': 1.0028402,
'learning_rate': 0.0293332}
I1022 06:35:12.795558 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.15176283,
'Loss/localization_loss': 0.09070753,
'Loss/regularization_loss': 0.7603698,
'Loss/total_loss': 1.0028402,
'learning_rate': 0.0293332}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1300 per-step time 0.472s
I1022 06:36:00.001249 139983244863360 model_lib_v2.py:707] Step 1300 per-step time 0.472s
INFO:tensorflow:{'Loss/classification_loss': 0.14497367,
'Loss/localization_loss': 0.10458946,
'Loss/regularization_loss': 0.75871265,
'Loss/total_loss': 1.0082757,
'learning_rate': 0.03066655}
I1022 06:36:00.001642 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.14497367,
'Loss/localization_loss': 0.10458946,
'Loss/regularization_loss': 0.75871265,
'Loss/total_loss': 1.0082757,
'learning_rate': 0.03066655}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1400 per-step time 0.428s
I1022 06:36:42.838780 139983244863360 model_lib_v2.py:707] Step 1400 per-step time 0.428s
INFO:tensorflow:{'Loss/classification_loss': 0.1584994,
'Loss/localization_loss': 0.11934401,
'Loss/regularization_loss': 0.75697535,
'Loss/total_loss': 1.0348188,
'learning_rate': 0.0319999}
I1022 06:36:42.839371 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.1584994,
'Loss/localization_loss': 0.11934401,
'Loss/regularization_loss': 0.75697535,
'Loss/total_loss': 1.0348188,
'learning_rate': 0.0319999}
INFO:tensorflow:Step 1500 per-step time 0.423s
I1022 06:37:25.130238 139983244863360 model_lib_v2.py:707] Step 1500 per-step time 0.423s
INFO:tensorflow:{'Loss/classification_loss': 0.1793848,
'Loss/localization_loss': 0.104649425,
'Loss/regularization_loss': 0.7551695,
'Loss/total_loss': 1.0392038,
'learning_rate': 0.03333325}
I1022 06:37:25.130620 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.1793848,
'Loss/localization_loss': 0.104649425,
'Loss/regularization_loss': 0.7551695,
'Loss/total_loss': 1.0392038,
'learning_rate': 0.03333325}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1600 per-step time 0.486s
I1022 06:38:13.689241 139983244863360 model_lib_v2.py:707] Step 1600 per-step time 0.486s
INFO:tensorflow:{'Loss/classification_loss': 0.16839702,
'Loss/localization_loss': 0.10930214,
'Loss/regularization_loss': 0.75332004,
'Loss/total_loss': 1.0310192,
'learning_rate': 0.034666598}
I1022 06:38:13.689643 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.16839702,
'Loss/localization_loss': 0.10930214,
'Loss/regularization_loss': 0.75332004,
'Loss/total_loss': 1.0310192,
'learning_rate': 0.034666598}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1700 per-step time 0.431s
I1022 06:38:56.811244 139983244863360 model_lib_v2.py:707] Step 1700 per-step time 0.431s
INFO:tensorflow:{'Loss/classification_loss': 0.15905431,
'Loss/localization_loss': 0.121479675,
'Loss/regularization_loss': 0.75141996,
'Loss/total_loss': 1.0319539,
'learning_rate': 0.03599995}
I1022 06:38:56.811614 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.15905431,
'Loss/localization_loss': 0.121479675,
'Loss/regularization_loss': 0.75141996,
'Loss/total_loss': 1.0319539,
'learning_rate': 0.03599995}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1800 per-step time 0.429s
I1022 06:39:39.745138 139983244863360 model_lib_v2.py:707] Step 1800 per-step time 0.429s
INFO:tensorflow:{'Loss/classification_loss': 0.14537309,
'Loss/localization_loss': 0.09904315,
'Loss/regularization_loss': 0.74940014,
'Loss/total_loss': 0.9938164,
'learning_rate': 0.037333302}
I1022 06:39:39.745619 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.14537309,
'Loss/localization_loss': 0.09904315,
'Loss/regularization_loss': 0.74940014,
'Loss/total_loss': 0.9938164,
'learning_rate': 0.037333302}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 1900 per-step time 0.486s
I1022 06:40:28.411999 139983244863360 model_lib_v2.py:707] Step 1900 per-step time 0.486s
INFO:tensorflow:{'Loss/classification_loss': 0.13884847,
'Loss/localization_loss': 0.083029695,
'Loss/regularization_loss': 0.7473349,
'Loss/total_loss': 0.96921307,
'learning_rate': 0.03866665}
I1022 06:40:28.412399 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.13884847,
'Loss/localization_loss': 0.083029695,
'Loss/regularization_loss': 0.7473349,
'Loss/total_loss': 0.96921307,
'learning_rate': 0.03866665}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2000 per-step time 0.435s
I1022 06:41:11.930493 139983244863360 model_lib_v2.py:707] Step 2000 per-step time 0.435s
INFO:tensorflow:{'Loss/classification_loss': 0.1522773,
'Loss/localization_loss': 0.12129066,
'Loss/regularization_loss': 0.7452603,
'Loss/total_loss': 1.0188283,
'learning_rate': 0.04}
I1022 06:41:11.930925 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.1522773,
'Loss/localization_loss': 0.12129066,
'Loss/regularization_loss': 0.7452603,
'Loss/total_loss': 1.0188283,
'learning_rate': 0.04}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2100 per-step time 0.469s
I1022 06:41:58.879983 139983244863360 model_lib_v2.py:707] Step 2100 per-step time 0.469s
INFO:tensorflow:{'Loss/classification_loss': 0.13670292,
'Loss/localization_loss': 0.07557889,
'Loss/regularization_loss': 0.7430951,
'Loss/total_loss': 0.9553769,
'learning_rate': 0.039998136}
I1022 06:41:58.880411 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.13670292,
'Loss/localization_loss': 0.07557889,
'Loss/regularization_loss': 0.7430951,
'Loss/total_loss': 0.9553769,
'learning_rate': 0.039998136}
INFO:tensorflow:Step 2200 per-step time 0.446s
I1022 06:42:43.420612 139983244863360 model_lib_v2.py:707] Step 2200 per-step time 0.446s
INFO:tensorflow:{'Loss/classification_loss': 0.14172058,
'Loss/localization_loss': 0.11927483,
'Loss/regularization_loss': 0.74091345,
'Loss/total_loss': 1.0019089,
'learning_rate': 0.039992537}
I1022 06:42:43.421042 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.14172058,
'Loss/localization_loss': 0.11927483,
'Loss/regularization_loss': 0.74091345,
'Loss/total_loss': 1.0019089,
'learning_rate': 0.039992537}
INFO:tensorflow:Step 2300 per-step time 0.445s
I1022 06:43:27.929881 139983244863360 model_lib_v2.py:707] Step 2300 per-step time 0.445s
INFO:tensorflow:{'Loss/classification_loss': 0.14164193,
'Loss/localization_loss': 0.08659148,
'Loss/regularization_loss': 0.7387465,
'Loss/total_loss': 0.9669799,
'learning_rate': 0.03998321}
I1022 06:43:27.930318 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.14164193,
'Loss/localization_loss': 0.08659148,
'Loss/regularization_loss': 0.7387465,
'Loss/total_loss': 0.9669799,
'learning_rate': 0.03998321}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2400 per-step time 0.470s
I1022 06:44:14.881204 139983244863360 model_lib_v2.py:707] Step 2400 per-step time 0.470s
INFO:tensorflow:{'Loss/classification_loss': 0.12687756,
'Loss/localization_loss': 0.08377945,
'Loss/regularization_loss': 0.736587,
'Loss/total_loss': 0.947244,
'learning_rate': 0.039970152}
I1022 06:44:14.881672 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.12687756,
'Loss/localization_loss': 0.08377945,
'Loss/regularization_loss': 0.736587,
'Loss/total_loss': 0.947244,
'learning_rate': 0.039970152}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2500 per-step time 0.440s
I1022 06:44:58.900156 139983244863360 model_lib_v2.py:707] Step 2500 per-step time 0.440s
INFO:tensorflow:{'Loss/classification_loss': 0.13478914,
'Loss/localization_loss': 0.08290454,
'Loss/regularization_loss': 0.7344188,
'Loss/total_loss': 0.9521125,
'learning_rate': 0.039953373}
I1022 06:44:58.900544 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.13478914,
'Loss/localization_loss': 0.08290454,
'Loss/regularization_loss': 0.7344188,
'Loss/total_loss': 0.9521125,
'learning_rate': 0.039953373}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2600 per-step time 0.446s
I1022 06:45:43.467185 139983244863360 model_lib_v2.py:707] Step 2600 per-step time 0.446s
INFO:tensorflow:{'Loss/classification_loss': 0.120550595,
'Loss/localization_loss': 0.059776545,
'Loss/regularization_loss': 0.7322546,
'Loss/total_loss': 0.9125818,
'learning_rate': 0.03993287}
I1022 06:45:43.467561 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.120550595,
'Loss/localization_loss': 0.059776545,
'Loss/regularization_loss': 0.7322546,
'Loss/total_loss': 0.9125818,
'learning_rate': 0.03993287}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2700 per-step time 0.466s
I1022 06:46:30.045598 139983244863360 model_lib_v2.py:707] Step 2700 per-step time 0.466s
INFO:tensorflow:{'Loss/classification_loss': 0.12852073,
'Loss/localization_loss': 0.06829482,
'Loss/regularization_loss': 0.73012453,
'Loss/total_loss': 0.9269401,
'learning_rate': 0.039908648}
I1022 06:46:30.045990 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.12852073,
'Loss/localization_loss': 0.06829482,
'Loss/regularization_loss': 0.73012453,
'Loss/total_loss': 0.9269401,
'learning_rate': 0.039908648}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2800 per-step time 0.448s
I1022 06:47:14.880281 139983244863360 model_lib_v2.py:707] Step 2800 per-step time 0.448s
INFO:tensorflow:{'Loss/classification_loss': 0.09725833,
'Loss/localization_loss': 0.05355456,
'Loss/regularization_loss': 0.72797996,
'Loss/total_loss': 0.8787929,
'learning_rate': 0.039880715}
I1022 06:47:14.883991 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.09725833,
'Loss/localization_loss': 0.05355456,
'Loss/regularization_loss': 0.72797996,
'Loss/total_loss': 0.8787929,
'learning_rate': 0.039880715}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2900 per-step time 0.434s
I1022 06:47:58.274571 139983244863360 model_lib_v2.py:707] Step 2900 per-step time 0.434s
INFO:tensorflow:{'Loss/classification_loss': 0.116147414,
'Loss/localization_loss': 0.07315055,
'Loss/regularization_loss': 0.72583556,
'Loss/total_loss': 0.91513354,
'learning_rate': 0.039849065}
I1022 06:47:58.275031 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.116147414,
'Loss/localization_loss': 0.07315055,
'Loss/regularization_loss': 0.72583556,
'Loss/total_loss': 0.91513354,
'learning_rate': 0.039849065}
INFO:tensorflow:Step 3000 per-step time 0.487s
I1022 06:48:46.972093 139983244863360 model_lib_v2.py:707] Step 3000 per-step time 0.487s
INFO:tensorflow:{'Loss/classification_loss': 0.116282046,
'Loss/localization_loss': 0.07361136,
'Loss/regularization_loss': 0.7237084,
'Loss/total_loss': 0.91360176,
'learning_rate': 0.03981372}
I1022 06:48:46.972539 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.116282046,
'Loss/localization_loss': 0.07361136,
'Loss/regularization_loss': 0.7237084,
'Loss/total_loss': 0.91360176,
'learning_rate': 0.03981372}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3100 per-step time 0.429s
I1022 06:49:29.905863 139983244863360 model_lib_v2.py:707] Step 3100 per-step time 0.429s
INFO:tensorflow:{'Loss/classification_loss': 0.12863551,
'Loss/localization_loss': 0.07450701,
'Loss/regularization_loss': 0.721611,
'Loss/total_loss': 0.92475355,
'learning_rate': 0.03977467}
I1022 06:49:29.906370 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.12863551,
'Loss/localization_loss': 0.07450701,
'Loss/regularization_loss': 0.721611,
'Loss/total_loss': 0.92475355,
'learning_rate': 0.03977467}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3200 per-step time 0.438s
I1022 06:50:13.752479 139983244863360 model_lib_v2.py:707] Step 3200 per-step time 0.438s
INFO:tensorflow:{'Loss/classification_loss': 0.10783537,
'Loss/localization_loss': 0.06788144,
'Loss/regularization_loss': 0.7194789,
'Loss/total_loss': 0.8951957,
'learning_rate': 0.03973194}
I1022 06:50:13.752980 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.10783537,
'Loss/localization_loss': 0.06788144,
'Loss/regularization_loss': 0.7194789,
'Loss/total_loss': 0.8951957,
'learning_rate': 0.03973194}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3300 per-step time 0.476s
I1022 06:51:01.337773 139983244863360 model_lib_v2.py:707] Step 3300 per-step time 0.476s
INFO:tensorflow:{'Loss/classification_loss': 0.08763507,
'Loss/localization_loss': 0.047675833,
'Loss/regularization_loss': 0.7173634,
'Loss/total_loss': 0.8526743,
'learning_rate': 0.03968552}
I1022 06:51:01.338141 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.08763507,
'Loss/localization_loss': 0.047675833,
'Loss/regularization_loss': 0.7173634,
'Loss/total_loss': 0.8526743,
'learning_rate': 0.03968552}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3400 per-step time 0.432s
I1022 06:51:44.527204 139983244863360 model_lib_v2.py:707] Step 3400 per-step time 0.432s
INFO:tensorflow:{'Loss/classification_loss': 0.15553851,
'Loss/localization_loss': 0.0994157,
'Loss/regularization_loss': 0.7152392,
'Loss/total_loss': 0.97019345,
'learning_rate': 0.039635435}
I1022 06:51:44.527571 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.15553851,
'Loss/localization_loss': 0.0994157,
'Loss/regularization_loss': 0.7152392,
'Loss/total_loss': 0.97019345,
'learning_rate': 0.039635435}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3500 per-step time 0.426s
I1022 06:52:27.177964 139983244863360 model_lib_v2.py:707] Step 3500 per-step time 0.426s
INFO:tensorflow:{'Loss/classification_loss': 0.10493549,
'Loss/localization_loss': 0.045261204,
'Loss/regularization_loss': 0.71311337,
'Loss/total_loss': 0.8633101,
'learning_rate': 0.03958168}
I1022 06:52:27.178364 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.10493549,
'Loss/localization_loss': 0.045261204,
'Loss/regularization_loss': 0.71311337,
'Loss/total_loss': 0.8633101,
'learning_rate': 0.03958168}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3600 per-step time 0.490s
I1022 06:53:16.167447 139983244863360 model_lib_v2.py:707] Step 3600 per-step time 0.490s
INFO:tensorflow:{'Loss/classification_loss': 0.1273449,
'Loss/localization_loss': 0.06699954,
'Loss/regularization_loss': 0.7110204,
'Loss/total_loss': 0.9053649,
'learning_rate': 0.039524276}
I1022 06:53:16.167791 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.1273449,
'Loss/localization_loss': 0.06699954,
'Loss/regularization_loss': 0.7110204,
'Loss/total_loss': 0.9053649,
'learning_rate': 0.039524276}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3700 per-step time 0.426s
I1022 06:53:58.783817 139983244863360 model_lib_v2.py:707] Step 3700 per-step time 0.426s
INFO:tensorflow:{'Loss/classification_loss': 0.1205756,
'Loss/localization_loss': 0.07273918,
'Loss/regularization_loss': 0.7089273,
'Loss/total_loss': 0.90224206,
'learning_rate': 0.03946323}
I1022 06:53:58.784204 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.1205756,
'Loss/localization_loss': 0.07273918,
'Loss/regularization_loss': 0.7089273,
'Loss/total_loss': 0.90224206,
'learning_rate': 0.03946323}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3800 per-step time 0.454s
I1022 06:54:44.172081 139983244863360 model_lib_v2.py:707] Step 3800 per-step time 0.454s
INFO:tensorflow:{'Loss/classification_loss': 0.11082064,
'Loss/localization_loss': 0.05475621,
'Loss/regularization_loss': 0.7068544,
'Loss/total_loss': 0.8724313,
'learning_rate': 0.039398547}
I1022 06:54:44.172489 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.11082064,
'Loss/localization_loss': 0.05475621,
'Loss/regularization_loss': 0.7068544,
'Loss/total_loss': 0.8724313,
'learning_rate': 0.039398547}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 3900 per-step time 0.458s
I1022 06:55:29.951050 139983244863360 model_lib_v2.py:707] Step 3900 per-step time 0.458s
INFO:tensorflow:{'Loss/classification_loss': 0.116561666,
'Loss/localization_loss': 0.049366444,
'Loss/regularization_loss': 0.7047878,
'Loss/total_loss': 0.8707159,
'learning_rate': 0.039330248}
I1022 06:55:29.951517 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.116561666,
'Loss/localization_loss': 0.049366444,
'Loss/regularization_loss': 0.7047878,
'Loss/total_loss': 0.8707159,
'learning_rate': 0.039330248}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4000 per-step time 0.430s
I1022 06:56:12.946711 139983244863360 model_lib_v2.py:707] Step 4000 per-step time 0.430s
INFO:tensorflow:{'Loss/classification_loss': 0.10756644,
'Loss/localization_loss': 0.07001631,
'Loss/regularization_loss': 0.7027153,
'Loss/total_loss': 0.880298,
'learning_rate': 0.039258346}
I1022 06:56:12.947188 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.10756644,
'Loss/localization_loss': 0.07001631,
'Loss/regularization_loss': 0.7027153,
'Loss/total_loss': 0.880298,
'learning_rate': 0.039258346}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4100 per-step time 0.494s
I1022 06:57:02.398619 139983244863360 model_lib_v2.py:707] Step 4100 per-step time 0.494s
INFO:tensorflow:{'Loss/classification_loss': 0.100549646,
'Loss/localization_loss': 0.059607156,
'Loss/regularization_loss': 0.70064926,
'Loss/total_loss': 0.86080605,
'learning_rate': 0.03918285}
I1022 06:57:02.398985 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.100549646,
'Loss/localization_loss': 0.059607156,
'Loss/regularization_loss': 0.70064926,
'Loss/total_loss': 0.86080605,
'learning_rate': 0.03918285}
INFO:tensorflow:Step 4200 per-step time 0.434s
I1022 06:57:45.747141 139983244863360 model_lib_v2.py:707] Step 4200 per-step time 0.434s
INFO:tensorflow:{'Loss/classification_loss': 0.09146054,
'Loss/localization_loss': 0.05623886,
'Loss/regularization_loss': 0.6985878,
'Loss/total_loss': 0.8462872,
'learning_rate': 0.03910377}
I1022 06:57:45.747494 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.09146054,
'Loss/localization_loss': 0.05623886,
'Loss/regularization_loss': 0.6985878,
'Loss/total_loss': 0.8462872,
'learning_rate': 0.03910377}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4300 per-step time 0.436s
I1022 06:58:29.392238 139983244863360 model_lib_v2.py:707] Step 4300 per-step time 0.436s
INFO:tensorflow:{'Loss/classification_loss': 0.11608726,
'Loss/localization_loss': 0.06350406,
'Loss/regularization_loss': 0.69655186,
'Loss/total_loss': 0.8761432,
'learning_rate': 0.039021127}
I1022 06:58:29.392648 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.11608726,
'Loss/localization_loss': 0.06350406,
'Loss/regularization_loss': 0.69655186,
'Loss/total_loss': 0.8761432,
'learning_rate': 0.039021127}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4400 per-step time 0.471s
I1022 06:59:16.488514 139983244863360 model_lib_v2.py:707] Step 4400 per-step time 0.471s
INFO:tensorflow:{'Loss/classification_loss': 0.10190118,
'Loss/localization_loss': 0.057971317,
'Loss/regularization_loss': 0.69451994,
'Loss/total_loss': 0.8543924,
'learning_rate': 0.03893494}
I1022 06:59:16.488843 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.10190118,
'Loss/localization_loss': 0.057971317,
'Loss/regularization_loss': 0.69451994,
'Loss/total_loss': 0.8543924,
'learning_rate': 0.03893494}
INFO:tensorflow:Step 4500 per-step time 0.441s
I1022 07:00:00.557350 139983244863360 model_lib_v2.py:707] Step 4500 per-step time 0.441s
INFO:tensorflow:{'Loss/classification_loss': 0.086393125,
'Loss/localization_loss': 0.04026761,
'Loss/regularization_loss': 0.69250077,
'Loss/total_loss': 0.81916153,
'learning_rate': 0.03884522}
I1022 07:00:00.557847 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.086393125,
'Loss/localization_loss': 0.04026761,
'Loss/regularization_loss': 0.69250077,
'Loss/total_loss': 0.81916153,
'learning_rate': 0.03884522}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4600 per-step time 0.426s
I1022 07:00:43.123352 139983244863360 model_lib_v2.py:707] Step 4600 per-step time 0.426s
INFO:tensorflow:{'Loss/classification_loss': 0.09288203,
'Loss/localization_loss': 0.040780406,
'Loss/regularization_loss': 0.69048077,
'Loss/total_loss': 0.8241432,
'learning_rate': 0.03875198}
I1022 07:00:43.123769 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.09288203,
'Loss/localization_loss': 0.040780406,
'Loss/regularization_loss': 0.69048077,
'Loss/total_loss': 0.8241432,
'learning_rate': 0.03875198}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4700 per-step time 0.470s
I1022 07:01:30.081151 139983244863360 model_lib_v2.py:707] Step 4700 per-step time 0.470s
INFO:tensorflow:{'Loss/classification_loss': 0.095925935,
'Loss/localization_loss': 0.04789356,
'Loss/regularization_loss': 0.6884787,
'Loss/total_loss': 0.8322982,
'learning_rate': 0.038655244}
I1022 07:01:30.081559 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.095925935,
'Loss/localization_loss': 0.04789356,
'Loss/regularization_loss': 0.6884787,
'Loss/total_loss': 0.8322982,
'learning_rate': 0.038655244}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4800 per-step time 0.428s
I1022 07:02:12.903995 139983244863360 model_lib_v2.py:707] Step 4800 per-step time 0.428s
INFO:tensorflow:{'Loss/classification_loss': 0.089835934,
'Loss/localization_loss': 0.04991635,
'Loss/regularization_loss': 0.68648946,
'Loss/total_loss': 0.82624173,
'learning_rate': 0.038555026}
I1022 07:02:12.904500 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.089835934,
'Loss/localization_loss': 0.04991635,
'Loss/regularization_loss': 0.68648946,
'Loss/total_loss': 0.82624173,
'learning_rate': 0.038555026}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4900 per-step time 0.438s
I1022 07:02:56.749983 139983244863360 model_lib_v2.py:707] Step 4900 per-step time 0.438s
INFO:tensorflow:{'Loss/classification_loss': 0.08984785,
'Loss/localization_loss': 0.046125717,
'Loss/regularization_loss': 0.6844983,
'Loss/total_loss': 0.8204719,
'learning_rate': 0.038451348}
I1022 07:02:56.750356 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.08984785,
'Loss/localization_loss': 0.046125717,
'Loss/regularization_loss': 0.6844983,
'Loss/total_loss': 0.8204719,
'learning_rate': 0.038451348}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 5000 per-step time 0.485s
I1022 07:03:45.252465 139983244863360 model_lib_v2.py:707] Step 5000 per-step time 0.485s
INFO:tensorflow:{'Loss/classification_loss': 0.08142167,
'Loss/localization_loss': 0.040387154,
'Loss/regularization_loss': 0.6825257,
'Loss/total_loss': 0.8043345,
'learning_rate': 0.038344227}
I1022 07:03:45.252819 139983244863360 model_lib_v2.py:708] {'Loss/classification_loss': 0.08142167,
'Loss/localization_loss': 0.040387154,
'Loss/regularization_loss': 0.6825257,
'Loss/total_loss': 0.8043345,
'learning_rate': 0.038344227}
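The recurring `Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2` warnings in the log above come from libjpeg hitting at least one damaged image in the training set on every pass over the data. As a quick first pass at locating such files, a stdlib-only sketch like the one below can flag suspects by checking the JPEG start/end markers (this is only a heuristic — it will not catch every form of corruption libjpeg complains about — and the dataset path is an assumption about this project's layout, not taken from the notebook):

```python
from pathlib import Path

def find_suspect_jpegs(image_dir):
    """Return .jpg files that lack the standard JPEG start/end markers.

    A well-formed JPEG begins with the SOI marker (FF D8) and ends with
    the EOI marker (FF D9). Truncated or mangled files often fail this
    cheap check, making it a useful first pass over a dataset before a
    full decode with an image library.
    """
    suspects = []
    for path in sorted(Path(image_dir).rglob("*.jpg")):
        data = path.read_bytes()
        if not (data.startswith(b"\xff\xd8") and data.endswith(b"\xff\xd9")):
            suspects.append(path)
    return suspects

# Example (the path is an assumption -- point it at your own dataset):
# for p in find_suspect_jpegs("Tensorflow/workspace/images"):
#     print("suspect:", p)
```

Any file flagged this way can then be re-saved with an image library (which strips the extraneous bytes) or simply replaced, so the warning stops repeating every epoch.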
!{command2}  # training model ssd_resnet_v1
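`command2` itself was defined in an earlier cell; with the TF Object Detection API, such a training command is typically assembled from the `model_main_tf2.py` script and the model's `pipeline.config`, roughly as sketched below. The folder paths and `CUSTOM_MODEL_NAME` here are hypothetical placeholders, not values taken from this notebook:

```python
# Sketch of how a TF Object Detection API training command is assembled.
# All paths and the model name below are assumptions for illustration.
APIMODEL_PATH = "Tensorflow/models"
MODEL_PATH = "Tensorflow/workspace/models"
CUSTOM_MODEL_NAME = "my_ssd_resnet_v1"  # hypothetical model folder name

command2 = (
    f"python {APIMODEL_PATH}/research/object_detection/model_main_tf2.py "
    f"--model_dir={MODEL_PATH}/{CUSTOM_MODEL_NAME} "
    f"--pipeline_config_path={MODEL_PATH}/{CUSTOM_MODEL_NAME}/pipeline.config "
    f"--num_train_steps=5000"
)
print(command2)
```

The `--num_train_steps=5000` flag matches the `Maybe overwriting train_steps: 5000` line that appears in the log output below.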
2022-10-22 07:04:17.332253: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 07:04:18.263851: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 07:04:18.264008: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 07:04:18.264029: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-10-22 07:04:23.982585: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I1022 07:04:24.025751 139879681050496 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 5000
I1022 07:04:24.033072 139879681050496 config_util.py:552] Maybe overwriting train_steps: 5000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I1022 07:04:24.033248 139879681050496 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py:564: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W1022 07:04:24.061027 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py:564: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/train.record']
I1022 07:04:24.067933 139879681050496 dataset_builder.py:162] Reading unweighted datasets: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/train.record']
I1022 07:04:24.068121 139879681050496 dataset_builder.py:79] Reading record datasets for input file: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
I1022 07:04:24.068212 139879681050496 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W1022 07:04:24.068280 139879681050496 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W1022 07:04:24.074250 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W1022 07:04:24.089864 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W1022 07:04:30.967114 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W1022 07:04:33.890393 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W1022 07:04:35.545909 139879681050496 deprecation.py:356] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
"`tf.keras.backend.set_learning_phase` is deprecated and "
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
I1022 07:05:14.196359 139879681050496 cross_device_ops.py:618] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
W1022 07:05:16.211919 139875231704832 deprecation.py:560] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
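One way to silence this warning for good is to re-encode every training image with Pillow, which decodes past the stray bytes and writes back a clean JPEG. This is a minimal sketch, not part of the original pipeline; the directory path, function name, and quality setting are illustrative:

```python
# Re-save every .jpg under a directory so the corrupt-JPEG warning goes away.
# Pillow tolerates the extraneous bytes on read and writes a clean file on save.
import io
from pathlib import Path

from PIL import Image


def reencode_jpegs(image_dir):
    """Re-encode every .jpg under image_dir; return how many files were rewritten."""
    fixed = 0
    for path in Path(image_dir).rglob("*.jpg"):
        data = path.read_bytes()
        img = Image.open(io.BytesIO(data)).convert("RGB")  # force a full decode
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=95)
        if buf.getvalue() != data:  # only rewrite files whose bytes changed
            path.write_bytes(buf.getvalue())
            fixed += 1
    return fixed
```

Running this once over the dataset directory before generating the TFRecords would keep the corrupted file out of the input pipeline entirely.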
Training then runs for 4,800 logged steps, reporting per-step time, the loss components, and the learning rate every 100 steps (each INFO line is also echoed by the absl logger, and the corrupt-JPEG warning keeps recurring between steps). The reported values are:

Step  Time(s)  Class loss   Loc loss     Reg loss    Total loss  Learning rate
100   1.385    0.3157732    0.30135825   0.27805358  0.89518505  0.014666351
200   0.833    0.2087822    0.20839468   0.27701524  0.6941921   0.0159997
300   0.829    0.28806937   0.22236058   0.27536762  0.7857976   0.01733305
400   0.831    0.17753309   0.14328834   0.27344555  0.594267    0.0186664
500   0.832    0.21625274   0.1832614    0.27383748  0.67335165  0.01999975
600   0.830    0.16618262   0.13664268   0.27192494  0.57475024  0.0213331
700   0.832    0.18129674   0.1371799    0.26978797  0.5882646   0.02266645
800   0.830    0.19324842   0.15791191   0.268052    0.6192124   0.023999799
900   0.831    0.23939312   0.23121594   0.2676042   0.7382133   0.025333151
1000  0.830    0.2004517    0.11527681   0.26793253  0.5836611   0.0266665
1100  0.862    0.15959395   0.16027746   0.26739478  0.5872662   0.02799985
1200  0.829    0.1549485    0.13036534   0.26605624  0.5513701   0.0293332
1300  0.831    0.16347705   0.10908871   0.26429966  0.5368654   0.03066655
1400  0.831    0.16279468   0.096227005  0.26341712  0.5224388   0.0319999
1500  0.831    0.1716927    0.14015995   0.26658726  0.5784399   0.03333325
1600  0.832    0.2698514    0.26072007   0.26700962  0.7975811   0.034666598
1700  0.831    0.2224734    0.17889918   0.2745276   0.6759002   0.03599995
1800  0.829    0.17841947   0.1304848    0.27717006  0.58607435  0.037333302
1900  0.831    0.1605377    0.10349086   0.27544236  0.5394709   0.03866665
2000  0.831    0.1504905    0.114831515  0.27189884  0.53722084  0.04
2100  0.862    0.12632363   0.08182167   0.26994088  0.47808617  0.039998136
2200  0.829    0.16678591   0.123520724  0.26679808  0.5571047   0.039992537
2300  0.832    0.16662152   0.11734111   0.2662069   0.5501695   0.03998321
2400  0.834    0.14260265   0.07625416   0.26332912  0.48218593  0.039970152
2500  0.832    0.1439301    0.098952666  0.25969967  0.50258243  0.039953373
2600  0.832    0.12661192   0.085133135  0.25703755  0.4687826   0.03993287
2700  0.832    0.11423855   0.07662379   0.25340843  0.4442708   0.039908648
2800  0.833    0.13516282   0.07641197   0.25060192  0.4621767   0.039880715
2900  0.831    0.12650973   0.07604483   0.24819703  0.4507516   0.039849065
3000  0.830    0.10897509   0.071293965  0.24803473  0.42830378  0.03981372
3100  0.864    0.12383219   0.06266217   0.24485603  0.43135038  0.03977467
3200  0.829    0.11662543   0.059592925  0.2425488   0.41876715  0.03973194
3300  0.831    0.115001336  0.07273388   0.23981078  0.427546    0.03968552
3400  0.831    0.11239237   0.06790066   0.23668198  0.41697502  0.039635435
3500  0.831    0.14304635   0.0787322    0.23337731  0.45515585  0.03958168
3600  0.830    0.12121208   0.07389943   0.23367524  0.42878675  0.039524276
3700  0.830    0.1158313    0.06560476   0.23124285  0.4126789   0.03946323
3800  0.830    0.09802219   0.050235525  0.22819287  0.3764506   0.039398547
3900  0.831    0.13177182   0.08326601   0.2265352   0.44157302  0.039330248
4000  0.830    0.14484939   0.08458305   0.22519018  0.45462263  0.039258346
4100  0.868    0.09288652   0.039761636  0.22255816  0.3552063   0.03918285
4200  0.828    0.09193174   0.040900607  0.22034822  0.3531806   0.03910377
4300  0.834    0.13346687   0.070396334  0.21907604  0.42293924  0.039021127
4400  0.831    0.08560679   0.0419784    0.21642601  0.3440112   0.03893494
4500  0.830    0.09175523   0.048504617  0.21354383  0.3538037   0.03884522
4600  0.832    0.11036381   0.068062976  0.21070144  0.3891282   0.03875198
4700  0.831    0.121919185  0.08389178   0.20877494  0.4145859   0.038655244
4800  0.831    0.092807606  0.038678125  0.2071936   0.3386793   0.038555026

The total loss falls from 0.895 at step 100 to 0.339 at step 4800. The learning rate ramps linearly from 0.0147 to its 0.04 peak at step 2000 and then decays slowly, consistent with the linear-warmup-plus-cosine-decay schedule used in the TF2 object-detection pipeline configs.
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 4900 per-step time 0.832s
I1022 08:14:16.380736 139879681050496 model_lib_v2.py:707] Step 4900 per-step time 0.832s
INFO:tensorflow:{'Loss/classification_loss': 0.10224596,
'Loss/localization_loss': 0.03966514,
'Loss/regularization_loss': 0.2048985,
'Loss/total_loss': 0.3468096,
'learning_rate': 0.038451348}
I1022 08:14:16.381047 139879681050496 model_lib_v2.py:708] {'Loss/classification_loss': 0.10224596,
'Loss/localization_loss': 0.03966514,
'Loss/regularization_loss': 0.2048985,
'Loss/total_loss': 0.3468096,
'learning_rate': 0.038451348}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 5000 per-step time 0.831s
I1022 08:15:39.533587 139879681050496 model_lib_v2.py:707] Step 5000 per-step time 0.831s
INFO:tensorflow:{'Loss/classification_loss': 0.07594406,
'Loss/localization_loss': 0.043222543,
'Loss/regularization_loss': 0.2028941,
'Loss/total_loss': 0.3220607,
'learning_rate': 0.038344227}
I1022 08:15:39.533959 139879681050496 model_lib_v2.py:708] {'Loss/classification_loss': 0.07594406,
'Loss/localization_loss': 0.043222543,
'Loss/regularization_loss': 0.2028941,
'Loss/total_loss': 0.3220607,
'learning_rate': 0.038344227}
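The training cell below invokes `!{command3}`; the `command3` string itself is assembled earlier in the notebook and is not shown in this section. As a hedged sketch only (the script and workspace paths are assumptions inferred from the log output, not the notebook's actual definitions), a typical TF Object Detection API training command is built like this:

```python
# Hypothetical reconstruction of command3 -- the real paths are defined
# earlier in the notebook; these values are assumptions based on the logs.
TRAINING_SCRIPT = "Tensorflow/models/research/object_detection/model_main_tf2.py"
MODEL_DIR = "Tensorflow/workspace/models/my_ssd_mobilenet_v2_fpnlite"
PIPELINE_CONFIG = MODEL_DIR + "/pipeline.config"

# model_main_tf2.py reads the pipeline config, writes checkpoints to
# --model_dir, and stops after --num_train_steps (5000 here, matching
# the "Maybe overwriting train_steps: 5000" line in the log).
command3 = (
    f"python {TRAINING_SCRIPT} "
    f"--model_dir={MODEL_DIR} "
    f"--pipeline_config_path={PIPELINE_CONFIG} "
    f"--num_train_steps=5000"
)
print(command3)
```

In Colab, `!{command3}` expands the Python variable and runs it as a shell command, which is why the training output appears directly below the cell.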
!{command3} # train the ssd_mobilenet_v2_fpnlite model
2022-10-22 08:16:30.498083: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 08:16:32.315047: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 08:16:32.315893: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 08:16:32.315921: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-10-22 08:16:40.174189: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 5000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py:564: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  "`tf.keras.backend.set_learning_phase` is deprecated and "
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
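The recurring "Corrupt JPEG data: 12658 extraneous bytes" warning means at least one training image carries garbage bytes that libjpeg skips on every epoch; training still works, but re-encoding the files silences the warning. A minimal sketch using Pillow (the directory path is an assumption, not the notebook's actual variable):

```python
from pathlib import Path

from PIL import Image  # Pillow is already a dependency of the TFOD API


def reencode_jpegs(image_dir: str) -> int:
    """Re-save every .jpg under image_dir through Pillow.

    Pillow decodes only the valid image data and writes a clean file,
    which strips the extraneous bytes that trigger the libjpeg warning.
    Returns the number of files rewritten.
    """
    fixed = 0
    for path in Path(image_dir).rglob("*.jpg"):
        with Image.open(path) as im:
            clean = im.convert("RGB")  # fully decode before overwriting
        clean.save(path, "JPEG", quality=95)
        fixed += 1
    return fixed


# Example (path is an assumption based on a typical TFOD workspace layout):
# reencode_jpegs("Tensorflow/workspace/images/train")
```

Note that `quality=95` re-compresses the images slightly; for lossless cleanup one could instead locate the EOI marker (`FF D9`) and truncate the file after it.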
INFO:tensorflow:Step 100 per-step time 0.803s
INFO:tensorflow:{'Loss/classification_loss': 0.3438223,
'Loss/localization_loss': 0.48876223,
'Loss/regularization_loss': 0.1513162,
'Loss/total_loss': 0.98390067,
'learning_rate': 0.0319994}
INFO:tensorflow:Step 200 per-step time 0.385s
INFO:tensorflow:{'Loss/classification_loss': 0.27512622,
'Loss/localization_loss': 0.32866278,
'Loss/regularization_loss': 0.15101817,
'Loss/total_loss': 0.7548071,
'learning_rate': 0.0373328}
INFO:tensorflow:Step 300 per-step time 0.390s
INFO:tensorflow:{'Loss/classification_loss': 0.23712069,
'Loss/localization_loss': 0.23779884,
'Loss/regularization_loss': 0.15070632,
'Loss/total_loss': 0.62562585,
'learning_rate': 0.0426662}
INFO:tensorflow:Step 400 per-step time 0.434s
INFO:tensorflow:{'Loss/classification_loss': 0.21792588,
'Loss/localization_loss': 0.23698339,
'Loss/regularization_loss': 0.1503694,
'Loss/total_loss': 0.6052787,
'learning_rate': 0.047999598}
INFO:tensorflow:Step 500 per-step time 0.401s
INFO:tensorflow:{'Loss/classification_loss': 0.21384913,
'Loss/localization_loss': 0.20985769,
'Loss/regularization_loss': 0.15001509,
'Loss/total_loss': 0.5737219,
'learning_rate': 0.053333}
INFO:tensorflow:Step 600 per-step time 0.387s
INFO:tensorflow:{'Loss/classification_loss': 0.21193102,
'Loss/localization_loss': 0.19041273,
'Loss/regularization_loss': 0.14963448,
'Loss/total_loss': 0.55197823,
'learning_rate': 0.0586664}
INFO:tensorflow:Step 700 per-step time 0.436s
INFO:tensorflow:{'Loss/classification_loss': 0.22001687,
'Loss/localization_loss': 0.18140619,
'Loss/regularization_loss': 0.14926842,
'Loss/total_loss': 0.5506915,
'learning_rate': 0.0639998}
INFO:tensorflow:Step 800 per-step time 0.394s
INFO:tensorflow:{'Loss/classification_loss': 0.22051013,
'Loss/localization_loss': 0.15678324,
'Loss/regularization_loss': 0.14888902,
'Loss/total_loss': 0.52618235,
'learning_rate': 0.069333196}
INFO:tensorflow:Step 900 per-step time 0.393s
INFO:tensorflow:{'Loss/classification_loss': 0.1980735,
'Loss/localization_loss': 0.15877254,
'Loss/regularization_loss': 0.14852786,
'Loss/total_loss': 0.5053739,
'learning_rate': 0.074666604}
INFO:tensorflow:Step 1000 per-step time 0.429s
INFO:tensorflow:{'Loss/classification_loss': 0.2392923,
'Loss/localization_loss': 0.20119645,
'Loss/regularization_loss': 0.14808063,
'Loss/total_loss': 0.5885694,
'learning_rate': 0.08}
INFO:tensorflow:Step 1100 per-step time 0.405s
INFO:tensorflow:{'Loss/classification_loss': 0.18097553,
'Loss/localization_loss': 0.14564691,
'Loss/regularization_loss': 0.1476527,
'Loss/total_loss': 0.4742751,
'learning_rate': 0.07999918}
INFO:tensorflow:Step 1200 per-step time 0.384s
INFO:tensorflow:{'Loss/classification_loss': 0.19227201,
'Loss/localization_loss': 0.1519817,
'Loss/regularization_loss': 0.14720266,
'Loss/total_loss': 0.4914564,
'learning_rate': 0.079996705}
INFO:tensorflow:Step 1300 per-step time 0.410s
INFO:tensorflow:{'Loss/classification_loss': 0.17343895,
'Loss/localization_loss': 0.13916576,
'Loss/regularization_loss': 0.14675908,
'Loss/total_loss': 0.45936382,
'learning_rate': 0.0799926}
INFO:tensorflow:Step 1400 per-step time 0.426s
INFO:tensorflow:{'Loss/classification_loss': 0.17316994,
'Loss/localization_loss': 0.15125245,
'Loss/regularization_loss': 0.14628868,
'Loss/total_loss': 0.47071105,
'learning_rate': 0.07998685}
INFO:tensorflow:Step 1500 per-step time 0.405s
INFO:tensorflow:{'Loss/classification_loss': 0.1930885,
'Loss/localization_loss': 0.14556152,
'Loss/regularization_loss': 0.14588512,
'Loss/total_loss': 0.48453516,
'learning_rate': 0.07997945}
INFO:tensorflow:Step 1600 per-step time 0.395s
INFO:tensorflow:{'Loss/classification_loss': 0.14093411,
'Loss/localization_loss': 0.10598748,
'Loss/regularization_loss': 0.14543135,
'Loss/total_loss': 0.39235294,
'learning_rate': 0.079970405}
INFO:tensorflow:Step 1700 per-step time 0.445s
INFO:tensorflow:{'Loss/classification_loss': 0.18064375,
'Loss/localization_loss': 0.1452112,
'Loss/regularization_loss': 0.14501756,
'Loss/total_loss': 0.47087252,
'learning_rate': 0.07995972}
INFO:tensorflow:Step 1800 per-step time 0.397s
INFO:tensorflow:{'Loss/classification_loss': 0.21031162,
'Loss/localization_loss': 0.14690556,
'Loss/regularization_loss': 0.1445187,
'Loss/total_loss': 0.5017359,
'learning_rate': 0.0799474}
INFO:tensorflow:Step 1900 per-step time 0.406s
INFO:tensorflow:{'Loss/classification_loss': 0.20601183,
'Loss/localization_loss': 0.1940493,
'Loss/regularization_loss': 0.14413789,
'Loss/total_loss': 0.544199,
'learning_rate': 0.07993342}
INFO:tensorflow:Step 2000 per-step time 0.437s
I1022 08:31:39.630940 140525414066048 model_lib_v2.py:707] Step 2000 per-step time 0.437s
INFO:tensorflow:{'Loss/classification_loss': 0.1478098,
'Loss/localization_loss': 0.11282443,
'Loss/regularization_loss': 0.14367674,
'Loss/total_loss': 0.404311,
'learning_rate': 0.07991781}
I1022 08:31:39.631298 140525414066048 model_lib_v2.py:708] {'Loss/classification_loss': 0.1478098,
'Loss/localization_loss': 0.11282443,
'Loss/regularization_loss': 0.14367674,
'Loss/total_loss': 0.404311,
'learning_rate': 0.07991781}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2100 per-step time 0.408s
I1022 08:32:20.454411 140525414066048 model_lib_v2.py:707] Step 2100 per-step time 0.408s
INFO:tensorflow:{'Loss/classification_loss': 0.16171753,
'Loss/localization_loss': 0.13802294,
'Loss/regularization_loss': 0.14314714,
'Loss/total_loss': 0.44288763,
'learning_rate': 0.07990056}
I1022 08:32:20.454823 140525414066048 model_lib_v2.py:708] {'Loss/classification_loss': 0.16171753,
'Loss/localization_loss': 0.13802294,
'Loss/regularization_loss': 0.14314714,
'Loss/total_loss': 0.44288763,
'learning_rate': 0.07990056}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2200 per-step time 0.392s
I1022 08:32:59.687992 140525414066048 model_lib_v2.py:707] Step 2200 per-step time 0.392s
INFO:tensorflow:{'Loss/classification_loss': 0.13764937,
'Loss/localization_loss': 0.08955069,
'Loss/regularization_loss': 0.14264598,
'Loss/total_loss': 0.36984605,
'learning_rate': 0.07988167}
I1022 08:32:59.688430 140525414066048 model_lib_v2.py:708] {'Loss/classification_loss': 0.13764937,
'Loss/localization_loss': 0.08955069,
'Loss/regularization_loss': 0.14264598,
'Loss/total_loss': 0.36984605,
'learning_rate': 0.07988167}
Corrupt JPEG data: 12658 extraneous bytes before marker 0xd2
INFO:tensorflow:Step 2300 per-step time 0.389s
INFO:tensorflow:{'Loss/classification_loss': 0.13642833,
'Loss/localization_loss': 0.11173704,
'Loss/regularization_loss': 0.14214547,
'Loss/total_loss': 0.39031082,
'learning_rate': 0.07986114}
INFO:tensorflow:Step 2400 per-step time 0.461s
INFO:tensorflow:{'Loss/classification_loss': 0.15837803,
'Loss/localization_loss': 0.13044882,
'Loss/regularization_loss': 0.14159718,
'Loss/total_loss': 0.43042403,
'learning_rate': 0.07983897}
INFO:tensorflow:Step 2500 per-step time 0.399s
INFO:tensorflow:{'Loss/classification_loss': 0.13694467,
'Loss/localization_loss': 0.11595314,
'Loss/regularization_loss': 0.1410631,
'Loss/total_loss': 0.3939609,
'learning_rate': 0.079815164}
INFO:tensorflow:Step 2600 per-step time 0.396s
INFO:tensorflow:{'Loss/classification_loss': 0.14988247,
'Loss/localization_loss': 0.09409117,
'Loss/regularization_loss': 0.14052862,
'Loss/total_loss': 0.38450226,
'learning_rate': 0.07978972}
INFO:tensorflow:Step 2700 per-step time 0.438s
INFO:tensorflow:{'Loss/classification_loss': 0.13587989,
'Loss/localization_loss': 0.104847826,
'Loss/regularization_loss': 0.14005439,
'Loss/total_loss': 0.38078213,
'learning_rate': 0.07976264}
INFO:tensorflow:Step 2800 per-step time 0.396s
INFO:tensorflow:{'Loss/classification_loss': 0.1370515,
'Loss/localization_loss': 0.09958906,
'Loss/regularization_loss': 0.13958341,
'Loss/total_loss': 0.37622395,
'learning_rate': 0.07973392}
INFO:tensorflow:Step 2900 per-step time 0.417s
INFO:tensorflow:{'Loss/classification_loss': 0.12913522,
'Loss/localization_loss': 0.08833077,
'Loss/regularization_loss': 0.13907732,
'Loss/total_loss': 0.3565433,
'learning_rate': 0.07970358}
INFO:tensorflow:Step 3000 per-step time 0.444s
INFO:tensorflow:{'Loss/classification_loss': 0.14529026,
'Loss/localization_loss': 0.0975895,
'Loss/regularization_loss': 0.1385943,
'Loss/total_loss': 0.38147405,
'learning_rate': 0.0796716}
INFO:tensorflow:Step 3100 per-step time 0.393s
INFO:tensorflow:{'Loss/classification_loss': 0.14999537,
'Loss/localization_loss': 0.09613367,
'Loss/regularization_loss': 0.1381027,
'Loss/total_loss': 0.38423175,
'learning_rate': 0.07963799}
INFO:tensorflow:Step 3200 per-step time 0.411s
INFO:tensorflow:{'Loss/classification_loss': 0.11946576,
'Loss/localization_loss': 0.06680734,
'Loss/regularization_loss': 0.13764901,
'Loss/total_loss': 0.3239221,
'learning_rate': 0.07960275}
INFO:tensorflow:Step 3300 per-step time 0.425s
INFO:tensorflow:{'Loss/classification_loss': 0.11813853,
'Loss/localization_loss': 0.07599521,
'Loss/regularization_loss': 0.13714638,
'Loss/total_loss': 0.3312801,
'learning_rate': 0.07956588}
INFO:tensorflow:Step 3400 per-step time 0.410s
INFO:tensorflow:{'Loss/classification_loss': 0.13763309,
'Loss/localization_loss': 0.09047551,
'Loss/regularization_loss': 0.13662173,
'Loss/total_loss': 0.3647303,
'learning_rate': 0.079527386}
INFO:tensorflow:Step 3500 per-step time 0.385s
INFO:tensorflow:{'Loss/classification_loss': 0.14120394,
'Loss/localization_loss': 0.09000268,
'Loss/regularization_loss': 0.13611539,
'Loss/total_loss': 0.36732203,
'learning_rate': 0.07948727}
INFO:tensorflow:Step 3600 per-step time 0.407s
INFO:tensorflow:{'Loss/classification_loss': 0.12713811,
'Loss/localization_loss': 0.08550117,
'Loss/regularization_loss': 0.13558374,
'Loss/total_loss': 0.34822303,
'learning_rate': 0.079445526}
INFO:tensorflow:Step 3700 per-step time 0.439s
INFO:tensorflow:{'Loss/classification_loss': 0.10903574,
'Loss/localization_loss': 0.05859272,
'Loss/regularization_loss': 0.13505958,
'Loss/total_loss': 0.30268806,
'learning_rate': 0.07940216}
INFO:tensorflow:Step 3800 per-step time 0.400s
INFO:tensorflow:{'Loss/classification_loss': 0.12217634,
'Loss/localization_loss': 0.07037391,
'Loss/regularization_loss': 0.13455956,
'Loss/total_loss': 0.3271098,
'learning_rate': 0.079357184}
INFO:tensorflow:Step 3900 per-step time 0.401s
INFO:tensorflow:{'Loss/classification_loss': 0.14481393,
'Loss/localization_loss': 0.09681995,
'Loss/regularization_loss': 0.13406794,
'Loss/total_loss': 0.3757018,
'learning_rate': 0.07931058}
INFO:tensorflow:Step 4000 per-step time 0.442s
INFO:tensorflow:{'Loss/classification_loss': 0.091286264,
'Loss/localization_loss': 0.052766953,
'Loss/regularization_loss': 0.13358371,
'Loss/total_loss': 0.27763695,
'learning_rate': 0.07926236}
INFO:tensorflow:Step 4100 per-step time 0.398s
INFO:tensorflow:{'Loss/classification_loss': 0.12385262,
'Loss/localization_loss': 0.084478356,
'Loss/regularization_loss': 0.13308446,
'Loss/total_loss': 0.34141544,
'learning_rate': 0.07921253}
INFO:tensorflow:Step 4200 per-step time 0.403s
INFO:tensorflow:{'Loss/classification_loss': 0.12716748,
'Loss/localization_loss': 0.08877495,
'Loss/regularization_loss': 0.13262549,
'Loss/total_loss': 0.3485679,
'learning_rate': 0.07916109}
INFO:tensorflow:Step 4300 per-step time 0.447s
INFO:tensorflow:{'Loss/classification_loss': 0.1333708,
'Loss/localization_loss': 0.07785837,
'Loss/regularization_loss': 0.13213177,
'Loss/total_loss': 0.34336096,
'learning_rate': 0.07910804}
INFO:tensorflow:Step 4400 per-step time 0.407s
INFO:tensorflow:{'Loss/classification_loss': 0.10802802,
'Loss/localization_loss': 0.06383533,
'Loss/regularization_loss': 0.13174272,
'Loss/total_loss': 0.30360606,
'learning_rate': 0.07905338}
INFO:tensorflow:Step 4500 per-step time 0.402s
INFO:tensorflow:{'Loss/classification_loss': 0.13216874,
'Loss/localization_loss': 0.0925746,
'Loss/regularization_loss': 0.13125093,
'Loss/total_loss': 0.35599428,
'learning_rate': 0.07899711}
INFO:tensorflow:Step 4600 per-step time 0.439s
INFO:tensorflow:{'Loss/classification_loss': 0.11730804,
'Loss/localization_loss': 0.06938836,
'Loss/regularization_loss': 0.13077572,
'Loss/total_loss': 0.31747213,
'learning_rate': 0.078939244}
INFO:tensorflow:Step 4700 per-step time 0.402s
INFO:tensorflow:{'Loss/classification_loss': 0.12022225,
'Loss/localization_loss': 0.072994314,
'Loss/regularization_loss': 0.1302751,
'Loss/total_loss': 0.32349166,
'learning_rate': 0.07887978}
INFO:tensorflow:Step 4800 per-step time 0.412s
INFO:tensorflow:{'Loss/classification_loss': 0.12228542,
'Loss/localization_loss': 0.083615534,
'Loss/regularization_loss': 0.12979692,
'Loss/total_loss': 0.3356979,
'learning_rate': 0.07881871}
INFO:tensorflow:Step 4900 per-step time 0.406s
INFO:tensorflow:{'Loss/classification_loss': 0.1328084,
'Loss/localization_loss': 0.093810126,
'Loss/regularization_loss': 0.12930267,
'Loss/total_loss': 0.3559212,
'learning_rate': 0.07875605}
INFO:tensorflow:Step 5000 per-step time 0.450s
INFO:tensorflow:{'Loss/classification_loss': 0.11165063,
'Loss/localization_loss': 0.063766755,
'Loss/regularization_loss': 0.12881169,
'Loss/total_loss': 0.30422908,
'learning_rate': 0.078691795}
# 7. Evaluate the Model
First, a brief explanation of what the evaluation process does. While training runs, it periodically writes checkpoint files into each model's checkpoint directory (the `--model_dir` passed to the training script); these are snapshots of the model at a given step. Whenever a new set of checkpoint files appears, the evaluation process loads it and measures how well the model detects objects in the test dataset. The results are summarised as a set of metrics that can be tracked over time.
# Passing --checkpoint_dir to model_main_tf2.py switches it from training to evaluation mode.
command = "python {} --model_dir={} --pipeline_config_path={} --checkpoint_dir={}".format(
    TRAINING_SCRIPT, paths['CHECKPOINT_PATH'], files['PIPELINE_CONFIG'], paths['CHECKPOINT_PATH'])
command2 = "python {} --model_dir={} --pipeline_config_path={} --checkpoint_dir={}".format(
    TRAINING_SCRIPT, paths['CHECKPOINT_PATH2'], files['PIPELINE_CONFIG2'], paths['CHECKPOINT_PATH2'])
command3 = "python {} --model_dir={} --pipeline_config_path={} --checkpoint_dir={}".format(
    TRAINING_SCRIPT, paths['CHECKPOINT_PATH3'], files['PIPELINE_CONFIG3'], paths['CHECKPOINT_PATH3'])
print(command)
print(command2)
print(command3)
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_mobilenet_v1 --pipeline_config_path=Tensorflow/workspace/models/ssd_mobilenet_v1/pipeline.config --checkpoint_dir=Tensorflow/workspace/models/ssd_mobilenet_v1
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_resnet101_v1 --pipeline_config_path=Tensorflow/workspace/models/ssd_resnet101_v1/pipeline.config --checkpoint_dir=Tensorflow/workspace/models/ssd_resnet101_v1
python Tensorflow/models/research/object_detection/model_main_tf2.py --model_dir=Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite --pipeline_config_path=Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/pipeline.config --checkpoint_dir=Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite
!{command}  # evaluating model ssd_mobilenet_v1
2022-10-22 08:52:28.991832: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 08:52:29.899650: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 08:52:29.899831: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 08:52:29.899855: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
INFO:tensorflow:Maybe overwriting sample_1_of_n_eval_examples: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
INFO:tensorflow:Maybe overwriting eval_num_epochs: 1
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
2022-10-22 08:52:33.357315: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v1
INFO:tensorflow:Found new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v1/ckpt-6
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
Corrupt JPEG data: premature end of data segment
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Finished eval step 0
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:459: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
INFO:tensorflow:Performing evaluation on 36 images.
creating index...
index created!
INFO:tensorflow:Loading and preparing annotation results...
INFO:tensorflow:DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=6.66s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.213
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.419
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.194
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.038
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.240
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.348
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.076
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.392
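Every AP and AR figure above is defined in terms of the intersection-over-union (IoU) between a predicted box and a ground-truth box; for instance, AP@.50IOU counts a detection as correct only when IoU ≥ 0.5. A minimal sketch of that computation, assuming boxes in the `[ymin, xmin, ymax, xmax]` convention used by the TensorFlow Object Detection API:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in [ymin, xmin, ymax, xmax] format."""
    ymin_a, xmin_a, ymax_a, xmax_a = box_a
    ymin_b, xmin_b, ymax_b, xmax_b = box_b

    # Intersection rectangle; clamp to zero when the boxes do not overlap.
    inter_h = max(0.0, min(ymax_a, ymax_b) - max(ymin_a, ymin_b))
    inter_w = max(0.0, min(xmax_a, xmax_b) - max(xmin_a, xmin_b))
    intersection = inter_h * inter_w

    area_a = (ymax_a - ymin_a) * (xmax_a - xmin_a)
    area_b = (ymax_b - ymin_b) * (xmax_b - xmin_b)
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.142857
```

The "small", "medium", and "large" breakdowns in the table then just bucket ground-truth boxes by area before averaging; the zeros for small objects indicate that the model detects none of the smallest gaps.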
INFO:tensorflow:Eval metrics at step 5000
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP: 0.212816
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP@.50IOU: 0.418929
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP@.75IOU: 0.193583
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP (small): 0.000000
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP (medium): 0.038140
INFO:tensorflow:	+ DetectionBoxes_Precision/mAP (large): 0.240142
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@1: 0.005180
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@10: 0.048620
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@100: 0.347720
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@100 (small): 0.000000
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@100 (medium): 0.075703
INFO:tensorflow:	+ DetectionBoxes_Recall/AR@100 (large): 0.392204
INFO:tensorflow:	+ Loss/localization_loss: 0.247106
INFO:tensorflow:	+ Loss/classification_loss: 0.371656
INFO:tensorflow:	+ Loss/regularization_loss: 0.682506
INFO:tensorflow:	+ Loss/total_loss: 1.301268
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v1
I1022 08:57:40.851524 139779506169728 checkpoint_utils.py:142] Waiting for new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v1
INFO:tensorflow:Timed-out waiting for a checkpoint.
I1022 09:57:40.625432 139779506169728 checkpoint_utils.py:205] Timed-out waiting for a checkpoint.
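The AP/AR figures above are the standard COCO detection metrics: AP is precision averaged over IoU thresholds from 0.50 to 0.95 in steps of 0.05, with the @0.50 and @0.75 rows broken out separately. IoU (intersection over union) for a pair of axis-aligned boxes can be computed with a few lines of plain Python; the function name and `[x1, y1, x2, y2]` box format below are illustrative, not part of the evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in [x1, y1, x2, y2] form."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # intersection 1, union 7 -> 0.142857...
```

A predicted box counts as a true positive at a given threshold only if its IoU with a ground-truth box meets that threshold, which is why AP@0.75 is much lower than AP@0.50 in the logs above.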
!{command2}  # evaluating model ssd_resnet101_v1
2022-10-22 10:30:57.220829: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 10:30:57.992227: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 10:30:57.992336: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 10:30:57.992356: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
INFO:tensorflow:Maybe overwriting sample_1_of_n_eval_examples: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
INFO:tensorflow:Maybe overwriting eval_num_epochs: 1
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
2022-10-22 10:31:01.110603: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_resnet101_v1
INFO:tensorflow:Found new checkpoint at Tensorflow/workspace/models/ssd_resnet101_v1/ckpt-6
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
Corrupt JPEG data: premature end of data segment
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Finished eval step 0
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:459: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
INFO:tensorflow:Performing evaluation on 36 images.
creating index...
index created!
INFO:tensorflow:Loading and preparing annotation results...
INFO:tensorflow:DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=6.94s).
Accumulating evaluation results...
DONE (t=0.03s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.204
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.387
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.200
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.045
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.228
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.047
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.326
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.077
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.366
INFO:tensorflow:Eval metrics at step 5000
INFO:tensorflow: + DetectionBoxes_Precision/mAP: 0.203593
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.50IOU: 0.386503
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.75IOU: 0.200488
INFO:tensorflow: + DetectionBoxes_Precision/mAP (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP (medium): 0.045257
INFO:tensorflow: + DetectionBoxes_Precision/mAP (large): 0.228449
INFO:tensorflow: + DetectionBoxes_Recall/AR@1: 0.005321
INFO:tensorflow: + DetectionBoxes_Recall/AR@10: 0.046988
INFO:tensorflow: + DetectionBoxes_Recall/AR@100: 0.325760
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (medium): 0.077108
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (large): 0.366426
INFO:tensorflow: + Loss/localization_loss: 0.286287
INFO:tensorflow: + Loss/classification_loss: 0.400071
INFO:tensorflow: + Loss/regularization_loss: 0.202867
INFO:tensorflow: + Loss/total_loss: 0.889225
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_resnet101_v1
INFO:tensorflow:Timed-out waiting for a checkpoint.
!{command3}  # evaluating model ssd_mobilenet_v2_fpnlite
2022-10-22 11:36:12.626666: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 11:36:13.407648: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 11:36:13.407762: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2022-10-22 11:36:13.407782: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
INFO:tensorflow:Maybe overwriting sample_1_of_n_eval_examples: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
INFO:tensorflow:Maybe overwriting eval_num_epochs: 1
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
2022-10-22 11:36:16.472007: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow/workspace/annotations/test.record']
INFO:tensorflow:Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:104: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/object_detection/builders/dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite
INFO:tensorflow:Found new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/ckpt-6
/usr/local/lib/python3.7/dist-packages/keras/backend.py:452: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
Corrupt JPEG data: premature end of data segment
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1176: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Finished eval step 0
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:459: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
INFO:tensorflow:Performing evaluation on 36 images.
creating index...
index created!
INFO:tensorflow:Loading and preparing annotation results...
INFO:tensorflow:DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=6.96s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.233
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.429
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.230
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.032
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.268
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.370
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.040
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.424
INFO:tensorflow:Eval metrics at step 5000
INFO:tensorflow: + DetectionBoxes_Precision/mAP: 0.233455
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.50IOU: 0.428991
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.75IOU: 0.229563
INFO:tensorflow: + DetectionBoxes_Precision/mAP (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP (medium): 0.032379
INFO:tensorflow: + DetectionBoxes_Precision/mAP (large): 0.267660
INFO:tensorflow: + DetectionBoxes_Recall/AR@1: 0.004955
INFO:tensorflow: + DetectionBoxes_Recall/AR@10: 0.049352
INFO:tensorflow: + DetectionBoxes_Recall/AR@100: 0.370327
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (medium): 0.039759
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (large): 0.424369
INFO:tensorflow: + Loss/localization_loss: 0.256235
INFO:tensorflow: + Loss/classification_loss: 0.361994
INFO:tensorflow: + Loss/regularization_loss: 0.128807
INFO:tensorflow: + Loss/total_loss: 0.747035
INFO:tensorflow:Waiting for new checkpoint at Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 89, in main
    wait_interval=300, timeout=FLAGS.eval_timeout)
  File "/usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py", line 1136, in eval_continuously
    checkpoint_dir, timeout=timeout, min_interval_secs=wait_interval):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 201, in checkpoints_iterator
    checkpoint_dir, checkpoint_path, timeout=timeout)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 149, in wait_for_new_checkpoint
    time.sleep(seconds_to_sleep)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 114, in <module>
    tf.compat.v1.app.run()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/platform/app.py", line 36, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 316, in run
    if isinstance(exc, SystemExit) and not exc.code:
KeyboardInterrupt
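Pulling the headline numbers out of the three evaluation runs above, ssd_mobilenet_v2_fpnlite achieves both the highest COCO mAP and the lowest total loss on the 36 test images. A small sketch to tabulate and compare them (the values are copied directly from the eval logs; the `results` dict is just a convenience structure, not part of the pipeline):

```python
# Eval metrics at step 5000, copied from the logs above
results = {
    'ssd_mobilenet_v1':         {'mAP': 0.212816, 'mAP@.50IOU': 0.418929, 'total_loss': 1.301268},
    'ssd_resnet101_v1':         {'mAP': 0.203593, 'mAP@.50IOU': 0.386503, 'total_loss': 0.889225},
    'ssd_mobilenet_v2_fpnlite': {'mAP': 0.233455, 'mAP@.50IOU': 0.428991, 'total_loss': 0.747035},
}

best = max(results, key=lambda name: results[name]['mAP'])
for name, m in results.items():
    print(f"{name:26s} mAP={m['mAP']:.3f}  mAP@.50={m['mAP@.50IOU']:.3f}  total_loss={m['total_loss']:.3f}")
print('Best mAP:', best)  # ssd_mobilenet_v2_fpnlite
```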
import os
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
from object_detection.utils import config_util
# Prevent TensorFlow from consuming all GPU memory by capping it at 5 GB
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])
    except RuntimeError as e:
        # Virtual devices must be configured before GPUs are initialized
        print(e)
# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG'])
detection_model = model_builder.build(model_config=configs['model'], is_training=False)
configs2 = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG2'])
detection_model2 = model_builder.build(model_config=configs2['model'], is_training=False)
configs3 = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG3'])
detection_model3 = model_builder.build(model_config=configs3['model'], is_training=False)
# Restore the latest checkpoint for each model (each model needs its own Checkpoint object)
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(paths['CHECKPOINT_PATH'], 'ckpt-6')).expect_partial()
ckpt2 = tf.compat.v2.train.Checkpoint(model=detection_model2)
ckpt2.restore(os.path.join(paths['CHECKPOINT_PATH2'], 'ckpt-6')).expect_partial()
ckpt3 = tf.compat.v2.train.Checkpoint(model=detection_model3)
ckpt3.restore(os.path.join(paths['CHECKPOINT_PATH3'], 'ckpt-6')).expect_partial()
@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections

@tf.function
def detect_fn2(image):
    image, shapes = detection_model2.preprocess(image)
    prediction_dict2 = detection_model2.predict(image, shapes)
    detections2 = detection_model2.postprocess(prediction_dict2, shapes)
    return detections2

@tf.function
def detect_fn3(image):
    image, shapes = detection_model3.preprocess(image)
    prediction_dict3 = detection_model3.predict(image, shapes)
    detections3 = detection_model3.postprocess(prediction_dict3, shapes)
    return detections3
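Each `detect_fn` above returns the post-processed detection dictionary. As a plain-Python illustration of the structure those functions produce and that the visualization code below consumes (the keys follow the TF Object Detection API convention, but every value here is fabricated for illustration):

```python
# Mock of the (unbatched) dict returned by a detect_fn; all values fabricated.
# Boxes are [ymin, xmin, ymax, xmax], normalized to [0, 1].
detections = {
    "num_detections": 2.0,  # a float32 scalar in the raw model output
    "detection_boxes": [[0.10, 0.20, 0.40, 0.50],
                        [0.50, 0.10, 0.90, 0.30]],
    "detection_scores": [0.91, 0.74],
    "detection_classes": [1, 1],  # 1 = 'object' in this project's label map
}
# As in the inference cells below: pop the count and cast it to int.
num_detections = int(detections.pop("num_detections"))
print(num_detections)  # 2
```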
FREEZE_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'exporter_main_v2.py')
command = "python {} --input_type=image_tensor --pipeline_config_path={} --trained_checkpoint_dir={} --output_directory={}".format(FREEZE_SCRIPT, files['PIPELINE_CONFIG'], paths['CHECKPOINT_PATH'], paths['OUTPUT_PATH'])
command2 = "python {} --input_type=image_tensor --pipeline_config_path={} --trained_checkpoint_dir={} --output_directory={}".format(FREEZE_SCRIPT, files['PIPELINE_CONFIG2'], paths['CHECKPOINT_PATH2'], paths['OUTPUT_PATH2'])
command3 = "python {} --input_type=image_tensor --pipeline_config_path={} --trained_checkpoint_dir={} --output_directory={}".format(FREEZE_SCRIPT, files['PIPELINE_CONFIG3'], paths['CHECKPOINT_PATH3'], paths['OUTPUT_PATH3'])
print(command)
print(command2)
print(command3)
!{command}
!{command2}
!{command3}
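The three near-identical `format` calls above can be factored into one helper. This is an illustrative sketch (the helper name and its arguments are my own, not part of the TF Object Detection API):

```python
def build_export_command(script, pipeline_config, checkpoint_dir, output_dir):
    """Assemble an exporter_main_v2.py invocation for one trained model."""
    return (
        "python {} --input_type=image_tensor --pipeline_config_path={} "
        "--trained_checkpoint_dir={} --output_directory={}"
    ).format(script, pipeline_config, checkpoint_dir, output_dir)

# Hypothetical paths, standing in for FREEZE_SCRIPT and the paths/files dicts.
cmd = build_export_command(
    "exporter_main_v2.py", "pipeline.config", "ckpt_dir", "out_dir")
print(cmd)
```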
!zip -r '/content/drive/MyDrive/All_ssd_mobilenet_v1.zip' '/content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/'
!zip -r '/content/drive/MyDrive/All_ssd_resnet101_v1.zip' '/content/Tensorflow/workspace/models/ssd_resnet101_v1/export/'
!zip -r '/content/drive/MyDrive/All_ssd_mobilenet_v2_fpnlite.zip' '/content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/'
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
!unzip '/content/drive/MyDrive/models/ssd_mobileNet_v1.zip'
!unzip '/content/drive/MyDrive/models/ssd_mobilenet_v2_fpnlite.zip'
!unzip '/content/drive/MyDrive/models/ssd_resnet101_v1_fpn.zip'
Archive:  /content/drive/MyDrive/models/ssd_mobileNet_v1.zip
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/saved_model.pb
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/assets/
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/variables/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/variables/variables.index
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model/variables/variables.data-00000-of-00001
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/checkpoint/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/checkpoint/ckpt-0.index
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/checkpoint/checkpoint
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/checkpoint/ckpt-0.data-00000-of-00001
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/pipeline.config
Archive:  /content/drive/MyDrive/models/ssd_mobilenet_v2_fpnlite.zip
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/checkpoint/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/checkpoint/checkpoint
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/checkpoint/ckpt-0.data-00000-of-00001
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/checkpoint/ckpt-0.index
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/pipeline.config
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/saved_model.pb
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/assets/
   creating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/variables/
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/variables/variables.data-00000-of-00001
  inflating: content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model/variables/variables.index
Archive:  /content/drive/MyDrive/models/ssd_resnet101_v1_fpn.zip
   creating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/pipeline.config
   creating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/checkpoint/
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/checkpoint/checkpoint
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/checkpoint/ckpt-0.index
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/checkpoint/ckpt-0.data-00000-of-00001
   creating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/saved_model.pb
   creating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/variables/
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/variables/variables.data-00000-of-00001
  inflating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/variables/variables.index
   creating: content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model/assets/
# LOAD THE MODELS
# SSD MobileNet v1 FPN saved-model directory
MOBILENET_V1_PATH_TO_SAVED_MODEL = "/content/content/Tensorflow/workspace/models/ssd_mobilenet_v1/export/saved_model"
# SSD MobileNet v2 FPNLite saved-model directory
MOBILENET_V2_FPNLITE_PATH_TO_SAVED_MODEL = "/content/content/Tensorflow/workspace/models/ssd_mobilenet_v2_fpnlite/export/saved_model"
# SSD ResNet101 v1 FPN saved-model directory
RESNET_V2_PATH_TO_SAVED_MODEL = "/content/content/Tensorflow/workspace/models/ssd_resnet101_v1/export/saved_model"
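Before calling `tf.saved_model.load`, it can help to verify that each export directory actually contains a `saved_model.pb`, since a wrong unzip path fails only at load time with a less obvious error. This small helper is an illustrative addition, not part of the original notebook:

```python
import os

def looks_like_saved_model(path):
    """Return True if `path` contains a TF SavedModel protobuf file."""
    return os.path.isfile(os.path.join(path, "saved_model.pb"))

# Example with a hypothetical export directory:
print(looks_like_saved_model("/tmp/nonexistent_export"))  # False
```

Running it over the three `*_PATH_TO_SAVED_MODEL` constants above before loading catches path typos early.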
"""
Object Detection (On Image) From TF2 Saved Model
=====================================
"""
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # Suppress TensorFlow INFO and WARNING logs
import tensorflow as tf
import cv2
from google.colab.patches import cv2_imshow
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Enable GPU dynamic memory allocation
# gpus = tf.config.experimental.list_physical_devices('GPU')
# for gpu in gpus:
# tf.config.experimental.set_memory_growth(gpu, True)
# PROVIDE PATH TO IMAGE DIRECTORY
IMAGE_PATHS = '/content/inventory_images/test/test_10.jpg'
# PROVIDE PATH TO LABEL MAP
PATH_TO_LABELS = '/content/label_map.pbtxt'
# PROVIDE THE MINIMUM CONFIDENCE THRESHOLD
MIN_CONF_THRESH = 0.60
# LOAD THE MODEL
import time
print('Loading model...', end='')
start_time = time.time()
# LOAD SAVED MODEL AND BUILD DETECTION FUNCTION
mobilenet_v1_detect_fn = tf.saved_model.load(MOBILENET_V1_PATH_TO_SAVED_MODEL)
mobilenet_v2_fpnlite_detect_fn = tf.saved_model.load(MOBILENET_V2_FPNLITE_PATH_TO_SAVED_MODEL)
resnet_v2_detect_fn = tf.saved_model.load(RESNET_V2_PATH_TO_SAVED_MODEL)
end_time = time.time()
elapsed_time = end_time - start_time
print('Done! Took {} seconds'.format(elapsed_time))
Loading model...Done! Took 40.347718238830566 seconds
We investigated the performance of current state-of-the-art object detection algorithms on the SKU-110K dataset. The goal is an analysis of how well object detectors perform under such dense, cluttered shelf conditions. We benchmarked SSD MobileNet v1 FPN, SSD MobileNet v2 FPNLite, and SSD ResNet101 v1 FPN on SKU-110K, leveraging transfer learning in all experiments: each network was initialized from its respective backbone pre-trained on the COCO dataset. We fine-tuned all models for 5000 steps using the Adam optimizer, and resized images to 640 × 640 during both training and testing.
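The 640 × 640 resize mentioned above can be sketched in plain Python. The helper below computes an aspect-preserving scale factor, one common resizing strategy (the TF OD API's `keep_aspect_ratio_resizer`; pipelines may instead stretch with `fixed_shape_resizer`); it is illustrative, not notebook code:

```python
def resize_scale(height, width, target=640):
    """Scale factor that fits an image inside a target x target square
    while preserving its aspect ratio (letterbox-style resize)."""
    return target / max(height, width)

# Hypothetical landscape shelf photo:
h, w = 1080, 1920
s = resize_scale(h, w)
new_h, new_w = round(h * s), round(w * s)
print(new_h, new_w)  # 360 640
```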
def load_image_into_numpy_array(path):
    """Load an image from file into a numpy array.

    Puts the image into a numpy array to feed into the TensorFlow graph.
    Note that by convention we put it into a numpy array with shape
    (height, width, channels), where channels=3 for RGB.

    Args:
        path: the file path to the image

    Returns:
        uint8 numpy array with shape (img_height, img_width, 3)
    """
    return np.array(Image.open(path))
print('Running inference for {}... '.format(IMAGE_PATHS), end='')
image = cv2.imread(IMAGE_PATHS)
# OpenCV loads images as BGR; convert to RGB, which the models expect.
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# The input needs to be a tensor: convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image_rgb)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis, ...]
# Detection with SSD MobileNet v1 FPN
mobilenet_v1_detections = mobilenet_v1_detect_fn(input_tensor)
# All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(mobilenet_v1_detections.pop('num_detections'))
mobilenet_v1_detections = {key: value[0, :num_detections].numpy()
                           for key, value in mobilenet_v1_detections.items()}
mobilenet_v1_detections['num_detections'] = num_detections
# detection_classes should be ints.
mobilenet_v1_detections['detection_classes'] = mobilenet_v1_detections['detection_classes'].astype(np.int64)
mobilenet_v1_image_with_detections = image.copy()
vis_util.visualize_boxes_and_labels_on_image_array(
    mobilenet_v1_image_with_detections,
    mobilenet_v1_detections['detection_boxes'],
    mobilenet_v1_detections['detection_classes'],
    mobilenet_v1_detections['detection_scores'],
    {1: {'id': 1, 'name': 'object'}},  # category index: id -> {'id', 'name'}
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=.1,
    agnostic_mode=False)
# Detection with SSD ResNet101 v1 FPN
resnet_v2_detections = resnet_v2_detect_fn(input_tensor)
# All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(resnet_v2_detections.pop('num_detections'))
resnet_v2_detections = {key: value[0, :num_detections].numpy()
                        for key, value in resnet_v2_detections.items()}
resnet_v2_detections['num_detections'] = num_detections
# detection_classes should be ints.
resnet_v2_detections['detection_classes'] = resnet_v2_detections['detection_classes'].astype(np.int64)
resnet_v2_image_with_detections = image.copy()
vis_util.visualize_boxes_and_labels_on_image_array(
    resnet_v2_image_with_detections,
    resnet_v2_detections['detection_boxes'],
    resnet_v2_detections['detection_classes'],
    resnet_v2_detections['detection_scores'],
    {1: {'id': 1, 'name': 'object'}},  # category index: id -> {'id', 'name'}
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=.1,
    agnostic_mode=False)
# Detection with SSD MobileNet v2 FPNLite
mobilenet_v2_fpnlite_detections = mobilenet_v2_fpnlite_detect_fn(input_tensor)
# All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(mobilenet_v2_fpnlite_detections.pop('num_detections'))
mobilenet_v2_fpnlite_detections = {key: value[0, :num_detections].numpy()
                                   for key, value in mobilenet_v2_fpnlite_detections.items()}
mobilenet_v2_fpnlite_detections['num_detections'] = num_detections
# detection_classes should be ints.
mobilenet_v2_fpnlite_detections['detection_classes'] = mobilenet_v2_fpnlite_detections['detection_classes'].astype(np.int64)
mobilenet_v2_fpnlite_image_with_detections = image.copy()
vis_util.visualize_boxes_and_labels_on_image_array(
    mobilenet_v2_fpnlite_image_with_detections,
    mobilenet_v2_fpnlite_detections['detection_boxes'],
    mobilenet_v2_fpnlite_detections['detection_classes'],
    mobilenet_v2_fpnlite_detections['detection_scores'],
    {1: {'id': 1, 'name': 'object'}},  # category index: id -> {'id', 'name'}
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=.1,
    agnostic_mode=False)
print("MobileNet_v1")
cv2_imshow(cv2.cvtColor(mobilenet_v1_image_with_detections, cv2.COLOR_BGR2RGB))
print("ResNet_v2")
cv2_imshow(cv2.cvtColor(resnet_v2_image_with_detections, cv2.COLOR_BGR2RGB))
print("MobileNet_v2_FPNLite")
cv2_imshow(cv2.cvtColor(mobilenet_v2_fpnlite_image_with_detections, cv2.COLOR_BGR2RGB))
Running inference for /content/inventory_images/test/test_10.jpg... MobileNet_v1
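Beyond visual inspection, a quick way to compare the three models is to count how many detections each produces above the confidence threshold. This stdlib-only sketch uses fabricated score lists in place of the real `*_detections['detection_scores']` arrays:

```python
MIN_CONF_THRESH = 0.60

def count_confident(scores, thresh=MIN_CONF_THRESH):
    """Number of detections whose score meets the confidence threshold."""
    return sum(1 for s in scores if s >= thresh)

# Fabricated scores, standing in for each model's detection_scores output.
scores_by_model = {
    "ssd_mobilenet_v1_fpn": [0.95, 0.82, 0.41, 0.12],
    "ssd_resnet101_v1_fpn": [0.97, 0.91, 0.88, 0.35],
    "ssd_mobilenet_v2_fpnlite": [0.90, 0.66, 0.59],
}
for name, scores in scores_by_model.items():
    print(name, count_confident(scores))
```

On real outputs, a model that reports markedly fewer confident boxes on the same shelf image is likely missing products or gaps that the others found.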